Search Results

Search found 10417 results on 417 pages for 'large'.

Page 332/417

  • bxSlider 4: slide text bleeds into the next slide

    - by Martialp
    I use http://bxslider.com/ to slide some content - just simple text. It seems to have a problem with text slides, though, not image slides. I have posted a simple live example: http://jsfiddle.net/Sbt75/324/ As you can see in the example, the text of the previous slide shows to the left of the active slide.

        <div class="row">
          <div class="large-6 columns">
            <ul class="bxslider">
              <li><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium, obcaecati, laudantium, blanditiis, adipisci quod eaque porro sapiente eligendi dicta voluptates voluptatum sunt aperiam totam modi quis vitae maxime! Dolor, possimus.</p></li>
              <li>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Odit, recusandae, delectus amet ipsa voluptate tempora architecto ad blanditiis officia perspiciatis nesciunt at ducimus quas nihil fuga. Qui optio minima accusamus?</li>
              <li><p class="right">il etait une fois un grand mechant loup qui s'appelet jean et qui aimer courir dans l'herbe avec une grande harpe pour jouer dvant les enfants Lorem ipsum dolor sit amet, consectetur adipisicing elit. Odit, recusandae, delectus amet ipsa voluptate tempora architecto ad blanditiis officia perspiciatis nesciunt at ducimus quas nihil fuga. Qui optio minima accusamus?</p></li>
            </ul>
          </div>
        </div>

        $(document).ready(function(){
            $('.bxslider').bxSlider({
                auto: false,
                autoControls: false,
                nextSelector: '#slider-next',
                prevSelector: '#slider-prev',
                nextText: 'Onward →',
                prevText: '← Go back'
            });
        });

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result is not None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1 MB. This one was bigger than 1 MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap's size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to let memcached store bigger objects?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low-level, and I'm wondering if an obvious solution exists...
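
    A minimal sketch of one variation on the first idea, assuming Django's low-level cache API (the key, TTL, and render_page hook below are illustrative, not part of the framework): cache each paginated sitemap page as its rendered XML string, so no single memcached entry approaches the 1 MB object limit.

        from django.core.cache import cache

        PAGE_TTL = 60 * 60  # matches changefreq = "hourly"; illustrative value

        def cached_sitemap_page(page, render_page):
            """render_page(page) -> the sitemap XML string for that page.
            Caching rendered XML per page keeps each cache entry small."""
            key = "sitemap_short_reviews:%s" % page
            xml = cache.get(key)
            if xml is None:
                xml = render_page(page)
                cache.set(key, xml, PAGE_TTL)
            return xml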

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable JAR so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as HttpClient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types until the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration?

    EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, and a few that are specific to each of YouTube and Wrzuta. On top of this I have a basic GUI common to both scrapers, but a few Wrzuta- and YouTube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the GUI for the respective website. Can Eclipse look at each of these to determine which types are referenced?

    Read the article

  • NHibernate IQueryable Collection as Property of Root

    - by Khalid Abuhakmeh
    Hello, and thank you for taking the time to read this. I have a root object that has a property that is a collection. For example, I have a Shelf object that has Books:

        // now
        public class Shelf
        {
            public ICollection<Book> Books { get; set; }
        }

        // want
        public class Shelf
        {
            public IQueryable<Book> Books { get; set; }
        }

    What I want to accomplish is to return a collection that is IQueryable, so that I can run paging and filtering off the collection directly from the parent:

        var shelf = shelfRepository.Get(1);
        var filtered = from book in shelf.Books
                       where book.Name == "The Great Gatsby"
                       select book;

    I want that query to be executed by NHibernate specifically - not a get-all that loads the whole collection and then filters it in memory (which is what currently happens when I use ICollection). The reasoning behind this is that my collection could be huge, tens of thousands of records, and a get-all query could bash my database. I would like to do this implicitly, so that when NHibernate sees an IQueryable on my class it knows what to do. I have looked at NHibernate's LINQ provider, and currently I am choosing to take large collections and split them into their own repositories so that I can make explicit calls for filtering and paging. LINQ to SQL offers something similar to what I'm talking about.

    Read the article

  • RANDOM Number Generation in C

    - by CVS-2600Hertz-wordpress-com
    Recently I have begun development of a simple game, an improved version of one I had developed earlier. A large part of the game's success depends on random number generation in different modes:

    MODE 1 - Truly random mode
        myRand(min, max, mode=1);
    Should return a random integer between min and max.

    MODE 2 - Pseudo-random: "token out of a bag" mode
        myRand(min, max, mode=2);
    Should return a random integer between min and max. It should also internally keep track of the values returned, and must not return the same value again until all the other values have been returned at least once.

    MODE 3 - Pseudo-random: "human" mode
        myRand(min, max, mode=3);
    Should return a random integer between min and max. The randomization ought not to be purely mathematically random, but rather random as users perceive it - how humans see "random".

    - Assume the code is time-critical (i.e. any performance optimizations are welcome).
    - Pseudo-code will do, but an implementation in C is what I'm looking for.
    - Please keep it simple; a single function should be sufficient (that's what I'm looking for).

    Thank You
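
    For mode 2, the usual construction is a "shuffle bag": refill and reshuffle an array of every value in the range whenever it runs empty, then hand values out one at a time. A minimal sketch in Python; the logic maps directly to C with a fixed array, an index, and a hand-rolled Fisher-Yates shuffle.

        import random

        class ShuffleBag:
            """Mode 2: every value in [lo, hi] is drawn once before any repeats."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
                self.bag = []

            def draw(self):
                if not self.bag:  # bag exhausted: refill and reshuffle
                    self.bag = list(range(self.lo, self.hi + 1))
                    random.shuffle(self.bag)  # Fisher-Yates under the hood
                return self.bag.pop()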

    Read the article

  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater.

    Should I compare every pair of sets? (If I keep a data structure that maps user IDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted - can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially the STL), but any more general algorithmic insights are also welcome.

    Update: Also note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
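
    A sketch of the inverted-index idea hinted at above (mapping user IDs to set membership), written in Python for brevity - the same structure ports to C++ with unordered_map and vectors. Only pairs of groups that actually share a user are ever counted, which avoids comparing every pair:

        from collections import defaultdict
        from itertools import combinations

        def high_overlap_pairs(groups, threshold=15):
            """groups: dict group_id -> iterable of user ids."""
            member_of = defaultdict(list)      # invert: user id -> group ids
            for gid, users in groups.items():
                for u in users:
                    member_of[u].append(gid)

            overlap = defaultdict(int)         # (gid_a, gid_b) -> shared users
            for gids in member_of.values():
                for a, b in combinations(sorted(gids), 2):
                    overlap[(a, b)] += 1

            return [pair for pair, n in overlap.items() if n >= threshold]

    In a shared-memory setting, each thread can count a partition of the users and the per-pair tallies can be merged at the end.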

    Read the article

  • Check if files in a directory are still being written using Windows Batch Script

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type:

        REM Begin ------------------------
        tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log
        FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end
        for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
            "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
            Move "C:\temp\%%A" "C:\Temp\Archive"
        )
        :end
        del search.log
        REM pause
        exit
        REM End ---------------------------

    This code works just fine for 90% of my needs. It will be deployed as a scheduled task. However, the *.ps files are rather large (a minimum of 1 GB) in real cases, so the code is supposed to check whether an incoming file has been completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach:

        :TestFile
        ren c:\file.txt c:\file.txt
        if errorlevel 0 goto docopy
        sleep 5
        goto TestFile
        :docopy

    However, that example is for a single fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to be used inside the for loop? Thank you for any help.
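
    One lock-free alternative to the rename trick, sketched in Python rather than batch purely for readability (the poll interval is illustrative): treat a file as complete once its size stops changing between two polls, then hand it to the zip step.

        import os
        import time

        def wait_until_stable(path, poll_seconds=10):
            """Return once the file's size is unchanged across one poll -
            a crude sign the writer has finished. No locking assumptions."""
            last = -1
            while True:
                size = os.path.getsize(path)
                if size == last:
                    return
                last = size
                time.sleep(poll_seconds)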

    Read the article

  • mysql: Average over multiple columns in one row, ignoring nulls

    - by Sai Emrys
    I have a large table (of sites) with several numeric columns - say a through f. (These are site rankings from different organizations, like Alexa, Google, Quantcast, etc. Each has a different range and format; they're straight dumps from the outside DBs.) For many of the records, one or more of these columns is null, because the outside DB doesn't have data for it. They all cover different subsets of my DB.

    I want column t to be their weighted average (each of a..f has a static weight which I assign), ignoring null values (which can occur in any of them), except being null if they're all null. I would prefer to do this with a simple SQL calculation, rather than doing it in app code or using some huge ugly nested IF block to handle every permutation of nulls. (Given that I have an increasing number of columns to average over as I add more outside DB sources, that would be exponentially more ugly and bug-prone.) I'd use AVG, but that's only for GROUP BY, and this is within one record. The data is semantically nullable, and I don't want to average in some "average" value in place of the nulls; I want to count only the columns for which data is there. Is there a good way to do this? Ideally, what I want is something like

        UPDATE sites SET t = AVG(a*a_weight, b*b_weight, ...)

    where any null values are just ignored and no grouping is happening.
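
    A sketch of one way to express this in plain SQL, with hypothetical weights for three of the columns: IFNULL drops each null column out of the numerator, IF drops its weight out of the denominator, and NULLIF turns an all-null row's denominator into NULL so t itself comes out null. Generating the statement keeps it maintainable as columns are added:

        # Hypothetical column -> weight map; extend as new sources are added.
        WEIGHTS = {"a": 0.5, "b": 0.3, "c": 0.2}

        numerator = " + ".join(
            "IFNULL(%s * %s, 0)" % (col, w) for col, w in WEIGHTS.items())
        denominator = " + ".join(
            "IF(%s IS NULL, 0, %s)" % (col, w) for col, w in WEIGHTS.items())

        # NULLIF(..., 0) makes the all-null case divide by NULL, yielding NULL.
        sql = "UPDATE sites SET t = (%s) / NULLIF(%s, 0)" % (numerator, denominator)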

    Read the article

  • Vertically center a fluid image in a fluid container

    - by Ferdy
    I certainly do not want to add to the pile of vertical-alignment CSS questions, but I've spent hours trying to find a solution, to no avail yet. Here's the situation: I am building a slideshow gallery of images. I want the images to be displayed as large as the user's window allows. So I have this outer placeholder (yes, I'm using HTML5 elements):

        <section class="photo full">

    which has the following CSS:

        section.photo.full {
            display: inline-block;
            width: 100%;
            height: 100%;
            position: absolute;
            overflow: hidden;
            text-align: center;
        }

    Next, the image is placed inside it. Depending on the orientation of the image, I set either the width or the height to 75%, and the other axis to auto:

        $wide = $bigimage['width'] >= $bigimage['height'] ? true : false; ?>
        <img src="<?= $bigimage['url'] ?>" width="<?= $wide ? "75%" : "auto" ?>" height="<?= $wide ? "auto" : "75%" ?>"/>

    So, we have a fluid outer container with a fluid image inside it. The horizontal centering of the image works, yet I cannot seem to find a way to vertically center the image within its container. I have researched centering methods, but most assume either the container or the image has a known width or height. Then there is the display:table-cell method, which does not seem to work for me either. I'm stuck. I'm looking for a CSS solution, but am open to js solutions too.

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html :

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

    - XML types (the MSDN page does mention large custom types - I guess that may include XML, even if the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to such a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • FLV performance and garbage collection

    - by justinbach
    I'm building a large flash site (AS3) that uses huge FLVs as transition videos from section to section. The FLVs are 1280x800 and are being scaled to 1680x1050 (much of which is not displayed to users with smaller screens), and are around 5-8 seconds apiece. I'm encoding the videos using On2's hi-def codec, VP6-S, and playback is pretty good with native FLV players, Perian-equipped Quicktime, and simple proof-of-concept FLV playback apps built in AS3. The problem I'm having is that in the context of the actual site, playback isn't as smooth; the framerate isn't quite as good as it should be, and more problematically, there's occasional jerkiness and dropped frames (sometimes pausing the video for as long as a quarter of a second or so). My guess is that this is being caused by garbage collection in the Flash player, which happens nondeterministically and is therefore hard to test and control for. I'm using a single instance of FLVPlayback to play the videos; I originally was using NetStream objects and so forth directly but switched to FLVPlayback for this reason. Has anyone experienced this sort of jerkiness with FLVPlayback (or more generally, with hi-def Flash video)? Am I right about GC being the culprit here, and if so, is there any way to prevent it during playback of these system-intensive transitions?

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section.

    My first thought was to collect the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse). So it looks like we're going to have to make regular requests to the server as the video is playing, which is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable, so I'm wondering whether I should take an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats.

    So, to the question: what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
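
    A sketch of the session-cache idea under stated assumptions (the names and intervals are made up, and heartbeats are assumed to carry the playhead position in seconds): merge each heartbeat into the session's last span unless the playhead jumped, and flush sessions to the database once they go quiet.

        import time

        HEARTBEAT_EVERY = 5    # assumed client heartbeat interval, seconds
        FLUSH_AFTER = 60       # seconds of silence before a session is flushed

        sessions = {}          # session_id -> {"spans": [[start, end], ...], "seen": ts}

        def heartbeat(session_id, position):
            """Fold a heartbeat at playhead `position` (seconds) into the spans."""
            s = sessions.setdefault(session_id, {"spans": [], "seen": 0.0})
            s["seen"] = time.time()
            spans = s["spans"]
            if spans and 0 <= position - spans[-1][1] <= HEARTBEAT_EVERY:
                spans[-1][1] = position             # contiguous playback: extend
            else:
                spans.append([position, position])  # scrub or jump: new span

        def flush_idle(write_spans):
            """write_spans(session_id, spans) persists one session as time spans."""
            now = time.time()
            idle = [sid for sid, s in sessions.items() if now - s["seen"] > FLUSH_AFTER]
            for sid in idle:
                write_spans(sid, sessions.pop(sid)["spans"])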

    Read the article

  • Maintaining a pool of DAO Class instances vs doing new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java). But we did a test anyway, comparing the newInstance method with maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. These objects represent tables with about 50 fields, all of type String.

    Can someone share their thoughts on this issue? I am now more confused about whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. The flip side is that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone, or whether I am missing some point here.
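
    For reference, the pooling pattern being benchmarked looks roughly like this - a sketch in Python (a Java version would typically wrap a fixed array or a BlockingQueue); the reset() hook is an assumption about how stale field values get cleared before reuse:

        import queue

        class ObjectPool:
            """Fixed-size pool: borrow, use, reset, return. Trades a larger
            steady-state footprint for zero allocation on the hot path."""
            def __init__(self, factory, size):
                self._pool = queue.Queue()
                for _ in range(size):
                    self._pool.put(factory())

            def acquire(self):
                return self._pool.get()   # blocks if all instances are borrowed

            def release(self, obj):
                obj.reset()               # assumed hook: clear stale fields
                self._pool.put(obj)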

    Read the article

  • Generic object to object mapping with parametrized constructor

    - by Rody van Sambeek
    I have a data access layer which returns an IDataRecord. I have a WCF service that serves DataContracts (DTOs). These DataContracts are instantiated by a parametrized constructor taking the IDataRecord, as follows:

        [DataContract]
        public class DataContractItem
        {
            [DataMember]
            public int ID;

            [DataMember]
            public string Title;

            public DataContractItem(IDataRecord record)
            {
                this.ID = Convert.ToInt32(record["ID"]);
                this.Title = record["title"].ToString();
            }
        }

    Unfortunately I can't change the DAL, so I'm obliged to work with the IDataRecord as input. But in general this works very well. The mappings are pretty simple most of the time; sometimes they are a bit more complex, but no rocket science. However, now I'd like to be able to use generics to instantiate the different DataContracts, to simplify the WCF service methods. I want to be able to do something like:

        public T DoSomething<T>(IDataRecord record)
        {
            ...
            return new T(record);
        }

    So I tried the following solutions:

    - Use a generic typed interface with a constructor.
      Doesn't work: we can't define a constructor in an interface.
    - Use a static method to instantiate the DataContract, and create a typed interface containing this static method.
      Doesn't work: we can't define a static method in an interface.
    - Use a generic typed interface containing the new() constraint.
      Doesn't work: the new() constraint cannot take a parameter (the IDataRecord).
    - Use a factory object to perform the mapping based on the DataContract type.
      Does work, but it's not very clean: I now have a switch statement with all the mappings in one file.

    I can't find a really clean solution for this. Can somebody shed a light on this for me? The project is too small for any complex mapping techniques and too large for a "switch-based" factory implementation.

    Read the article

  • Which single IoC/DI container would you recommend using and why?

    - by Rob G
    I'm asking this question because it's a good way to gauge how the community at large feels about the various containers/frameworks and why. Also, whilst my expertise may lie in .Net development, I am very interested in which frameworks are popular (and why) in other languages. If I feel the need to start digging into Java, for instance, then I'd like to hit the ground running with good (comfortable) knowledge that I'm starting in the right direction. Does Ruby even need one, with all its magnificent dynamism? I have my own opinions on the .Net front, and will probably add my own personal favourite in an answer below, but I'm interested in all languages and opinions here. With all that in mind, could you please state only one IoC/DI framework that you use and recommend, with the language of choice (Java/Ruby/.Net/Smalltalk etc.) and your reasoning for the choice. If someone has already answered with your particular flavour, then you can just vote it up and add comments to it, so that anyone looking for advice in the future can see which frameworks are most likely to work for them once they read your reasoning. I'm hoping that over time, the best ones will bubble up to the top. I realise that this question doesn't have only one correct answer, so I won't be choosing one - the community will decide which framework gets the most votes and why. Of course, if you really feel strongly opposed to a particular brand, you could take the reputation hit and vote it down too, and this question can serve as a true wiki-style entry for research into this field. Remember, only one IoC per answer you write please - if you feel the need to promote two frameworks, then write two answers with your reasoning inside for each choice - then others in the community can agree or disagree with you.

    Read the article

  • How to Manage CSS Explosion

    - by Jason
    I have been relying heavily on CSS for a website that I am working on (currently, everything is done as property values within each tag on the website, and I'm trying to get away from that to make updates significantly easier). The problem I am running into is that I'm starting to get a bit of "CSS explosion" going on. It is becoming difficult for me to decide how best to organize and abstract data within the CSS file. For example, I am using a large number of div tags within the website (previously it was completely table-based), so I'm starting to get a lot of CSS that looks like this:

        div.title {
            background-color: Blue;
            color: White;
            text-align: center;
        }

        div.footer {
            /* Stuff Here */
        }

        div.body {
            /* Stuff Here */
        }

        etc.

    It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to end up with is a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive.

    (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and demonstrate their power to increase the speed of web development, so other individuals that may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

    - Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
    - Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata?

    The main issues are:

    - Storing images in a way that allows for rapid random access (tiling)
    - Storing downsampled images for rapid zooming (pyramid)
    - Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
    - Storing important metadata, including associated images like a slide's label and thumbnail
    - Support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.
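
    As a back-of-envelope illustration of the tiling-plus-pyramid layout (tile size and image dimensions below are arbitrary): each pyramid level halves both dimensions until the whole image fits in one tile, and the full pyramid costs only about a third more storage than the base level.

        import math

        def pyramid_plan(width, height, tile=256):
            """Yield (level, w, h, tiles_x, tiles_y); level 0 = full resolution."""
            level, w, h = 0, width, height
            while True:
                tx, ty = math.ceil(w / tile), math.ceil(h / tile)
                yield level, w, h, tx, ty
                if tx == 1 and ty == 1:
                    break
                level, w, h = level + 1, max(1, w // 2), max(1, h // 2)

        # e.g. a 100,000 x 60,000 slide scan:
        for lvl in pyramid_plan(100_000, 60_000):
            print(lvl)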

    Read the article

  • MySQL efficiency as it relates to the database/table size

    - by mlissner
    I'm building a system using Django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple of days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things:

    - Is adding all these rows going to seriously degrade the efficiency of my Django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly? Or will this make the database slow all around, to the extent that I can't do anything about it at all?
    - If you scoff at my 40k rows, then my next question is: at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret.
    - How is Sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx?

    Thanks for any thoughts.

    Read the article

  • Algorithm possible amounts (over)paid for a specific price, based on denominations

    - by Wrikken
    In a current project, people can order goods delivered to their door and choose 'pay on delivery' as a payment option. To make sure the delivery guy has enough change, customers are asked to input the amount they will pay (e.g. delivery is 48.13, they will pay with 60.00 (3 x 20)). Now, if it were up to me, I'd make it a free field, but apparently higher-ups have decided it should be a selection based on available denominations, without offering amounts that would result in a set of denominations which could be smaller. Example:

        denominations = [1, 2, 5, 10, 20, 50]
        price = 78.12
        possibilities:
            79  (multitude of options)
            80  (e.g. 4 x 20)
            90  (e.g. 50 + 2 x 20)
            100 (2 x 50)

    It's international, so the denominations could change, and the algorithm should be based on that list. The closest I have come, which seems to work, is this:

        for all denominations in reversed order (large => small)
            add ceil(price / denomination) * denomination to possibles
            baseprice = floor(price / denomination) * denomination
            for all smaller denominations as subdenomination in reversed order
                add baseprice + (ceil((price - baseprice) / subdenomination) * subdenomination) to possibles
            end for
        end for
        remove doubles
        sort

    It seems to work, but it emerged after wildly trying all kinds of compact algorithms, and I cannot defend why it works, which could lead to some edge case or new country getting wrong options. It also generates a serious number of duplicates. As this is probably not a new problem, and Google et al. could not provide me with an answer save for loads of pages calculating how to make exact change, I thought I'd ask SO: have you solved this problem before? With which algorithm? Any proof it will always work?
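
    The pseudocode above translates almost line for line into Python - a sketch (using floats as in the example; for production, working in integer cents avoids rounding surprises). Running it on the example yields [79, 80, 90, 100], and the set takes care of removing the duplicates:

        import math

        def overpay_options(price, denominations):
            options = set()
            for d in sorted(denominations, reverse=True):
                # round price up to a multiple of this denomination
                options.add(math.ceil(price / d) * d)
                # or round down, then top up with each smaller denomination
                base = math.floor(price / d) * d
                for sub in (s for s in denominations if s < d):
                    options.add(base + math.ceil((price - base) / sub) * sub)
            return sorted(options)

        print(overpay_options(78.12, [1, 2, 5, 10, 20, 50]))  # [79, 80, 90, 100]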

    Read the article

  • Using a Javascript Variable & Sending to JSON

    - by D Franks
    Hello all! I'm trying to take a URL's hash value, send it through a function, turn that value into an object, and ultimately send the value as JSON. I have the following setup:

        function content(cur){
            var mycur = $H(cur);
            var pars = "p=" + mycur.toJSON();
            new Ajax.Updater('my_box', 'test.php', { parameters: pars });
        }

        function update(){
            if(window.location.hash.length > 0){
                content(window.location.hash.substr(1)); // Everything after the '#'
            }
        }

        var curHashVal = window.location.hash;
        window.onload = function(){
            setInterval(function(){
                if(curHashVal != window.location.hash){
                    update();
                    curHashVal = window.location.hash;
                }
            }, 1);
        }

    But for some reason, I can't seem to get the right JSON output. It either comes back as a very large object (1:"{", 2:"k", ...) or does not return at all. I doubt that it is impossible to accomplish, but I've exhausted most of the ways I can think of. Other ways I've tried were "{" + cur + "}" as well as cur.toObject(), but none seemed to get the job done. Thanks for the help!

    Read the article

  • Best way to store list of numbers and to retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store on a database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90. I will store a big number of those rows. What I'd like to be able to do is retrieve the rows that have at least X numbers in common with a newly generated row. Example - these are in the DB:

        [3,4,33,67,85,99]
        [55,56,77,89,98,99]
        [3,4,23,47,85,91]

    I will generate this:

        [1,2,11,45,47,88]

    and now I want to get the rows that have at least 1 number in common with it. The easiest (and dumbest?) way is to make 6 selects and check for similar results. I thought of storing the numbers as a large binary string like

        000000000000000000000100000000010010110000000000000000000000000

    with 99 positions, where a 1 at the 44th position means that the row contains the number 44. This method is probably just shifting the difficult task to the DB, though, and again it's not very smart. Any suggestions?
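
    If the matching can happen in application code, the bitmask idea works nicely with arbitrary-precision integers - a sketch in Python (bit n set means number n is in the row; the rows and threshold come from the example above):

        def to_mask(numbers):
            mask = 0
            for n in numbers:
                mask |= 1 << n         # bit n <=> number n drawn
            return mask

        rows = [to_mask(r) for r in ([3, 4, 33, 67, 85, 99],
                                     [55, 56, 77, 89, 98, 99],
                                     [3, 4, 23, 47, 85, 91])]
        new = to_mask([1, 2, 11, 45, 47, 88])

        # popcount of the ANDed masks = how many numbers two rows share
        matches = [i for i, m in enumerate(rows)
                   if bin(m & new).count("1") >= 1]
        print(matches)  # -> [2]; only the third row shares a number (47)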

    Read the article

  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a hex-coded value, rather than just using mysql_real_escape_string on the binary string:

        // What you supposedly should do
        $hex = bin2hex($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";

        // Rather than
        $bin = mysql_real_escape_string($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";

    It is supposedly for performance reasons - something to do with how MySQL handles large strings vs. how it handles hex-coded values. However, I am having a hard time confirming this. All my tests indicate the exact opposite: the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1, Win7 x64, using a fairly simple insert loop.) For instance, I graphed the private memory usage of the mysqld process while the test code was running, and it shows the same pattern. Does anybody have any explanations or resources that would clarify this? Thanks.

    Read the article

  • LinqKit stack overflow exception using predicate builder

    - by MLynn
    I am writing an application in C# using LINQ and LINQKit. I have a very large database table with company registration numbers in it. I want to write a LINQ query which will produce the equivalent of this SQL:

        select * from table1 where regno in ('123', '456')

    The 'in' clause may have thousands of terms. First I get the company registration numbers from a field such as Country. I then add all the company registration numbers to a predicate:

        var predicate = PredicateExtensions.False<table2>();
        if (RegNos != null)
        {
            foreach (int searchTerm in RegNos)
            {
                int temp = searchTerm;
                predicate = predicate.Or(ec => ec.regno.Equals(temp));
            }
        }

    On Windows Vista Professional, a stack overflow exception occurred after 4063 terms were added. On Windows Server 2003, a stack overflow exception occurred after about 1000 terms were added. I had to solve this problem quickly for a demo, so I used this notation instead:

        var predicate = PredicateExtensions.False<table2>();
        if (RegNosDistinct != null)
        {
            predicate = predicate.Or(ec => RegNos.Contains(ec.regno));
        }

    My questions are:

    - Why does a stack overflow occur using the foreach loop? I take it Windows Server 2003 has a much smaller stack per process/thread than the NT/2000/XP/Vista/Windows 7 workstation versions of Windows.
    - Which is the fastest and most correct way to achieve this using LINQ and LINQKit? It was suggested I stop using LINQ and go back to dynamic SQL or ADO.NET, but I think using LINQ and LINQKit is far better for maintainability.

    Read the article

  • Saving "heavy" figure to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100,000+) and want to save it to a PDF file. With the zbuffer or painters renderer, I get a very large file that opens slowly (over 4 MB) - all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels; that file is about 150 KB. Try this simplified code, for example:

        x = linspace(1, 10, 100000);
        y = sin(x) + randn(size(x));
        plot(x, y, '.')
        set(gcf, 'Renderer', 'zbuffer')
        print -dpdf -r300 testpdf_zb
        set(gcf, 'Renderer', 'painters')
        print -dpdf -r300 testpdf_pa
        set(gcf, 'Renderer', 'opengl')
        print -dpdf -r300 testpdf_op

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X - it looks like OpenGL is not supported there, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.

    Read the article

  • Service Browser for AMF calls (Flex to Java)

    - by Tehsin
    Has anyone used, or is anyone aware of, a service browser to test AMF calls? I am looking for a tool similar to ZamfBrowser ( http://www.zamfbrowser.org ), but one that works for the Java environment; ZamfBrowser is geared towards AMFPHP. The idea here is to provide a service browser that allows developers to test Java services using the AMF protocol, without having to go through the Flex UI all the time. There has got to be something out there already for this, but I can't seem to locate anything. It's kind of funny and strange that a service browser exists for AMFPHP but not for regular AMF calls in a Java environment. I would imagine something exists under BlazeDS or LCDS? I'm trying to find it in the docs but can't seem to find anything. The best alternative I can think of at the moment is to use FlexMonkey to record stuff and then simulate it with that, which is okay I guess, but it still sucks because you have to go in and create the Flex UI first. With something like ZamfBrowser, you simply point it at the service calls; it tells the server-side developers whether their code works, generates the required AS3 classes for you, and makes the integration process much easier in a large team. Any help or insight would be appreciated :) Thanks!

    Read the article
