Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 379/457 | < Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests; that's about 1% of the total tests that have been written to date.

    MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few.

    QUESTION: Given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • iPhone OS: Get a list of methods and variables from anonymous object

    - by ChrisOPeterson
    I am building my first iPhone/Obj-C app and I have a large number of data-holding subclasses that I am passing into a cite function. To the cite function these objects are anonymous, and I need to find a way to access all the variables of each passed object. I have been using a pre-built NSArray and selectors to do this, but with more than 30 entries (and growing) it is kind of silly to do manually. There has to be a way to dynamically look up all the variables of an anonymous object. The Obj-C runtime docs mention this problem, but from what I can tell this is not available in iPhone OS. If it is, then I don't understand the implementation and need some guidance. A similar question was asked before, but again I think they were talking about OS X and not iPhone. Any thoughts?

        -(NSString*)cite:(id)source {
            NSString *sourceClass = NSStringFromClass([source class]);
            // Runs through all the variables in the manually built methodList
            for (id method in methodList) {
                SEL x = NSSelectorFromString(method);
                // further implementation
            }

            // Should be something like:
            // NSArray *methodList = [[NSArray alloc] initWithObjects:[source getVariableList]];
            for (id method in methodList) {
                SEL x = NSSelectorFromString(method);
                // Further implementation
            }
        }
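
    For reference, a minimal sketch of the runtime-reflection route the question is asking about: class_copyIvarList and ivar_getName are part of the Objective-C runtime (objc/runtime.h) and are available on iPhone OS; the helper function name here is made up.

        #import <Foundation/Foundation.h>
        #import <objc/runtime.h>
        #import <stdlib.h>

        // Hypothetical helper: list the instance variable names of any object.
        NSArray *IvarNamesForObject(id source) {
            unsigned int count = 0;
            Ivar *ivars = class_copyIvarList([source class], &count);  // must be free()d
            NSMutableArray *names = [NSMutableArray arrayWithCapacity:count];
            for (unsigned int i = 0; i < count; i++) {
                [names addObject:[NSString stringWithUTF8String:ivar_getName(ivars[i])]];
            }
            free(ivars);
            return names;
        }

    The values behind those names can then be read generically with [source valueForKey:name] (key-value coding), which avoids maintaining the selector list by hand.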

    Read the article

  • Optimizing TSQL code

    - by adopilot
    My job is to maintain one application which makes heavy use of SQL Server (MSSQL 2005). Until now the middle tier has stored its T-SQL code in XML and sent dynamic T-SQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of my queries to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table. Sample:

        Select ..... from .... where ....
          and (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
          and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
          and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
          and (
                (a.id in (select art_id from osobina_veze
                          where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina))
                          group by art_id
                          having count(art_id) = @podosobina_count))
                or ('0' = @podosobina)
              )

    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I will use in all queries, or use #temp tables and simply inner join my query to them each time the proc executes? Or use a #temp table filled by a table-valued function? Or leave all queries with this large WHERE clause and hope that the indexes do their job? Or use a WITH statement (a CTE)?
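
    To make the table-valued-function option concrete, a sketch only: the columns and parameters are the ones from the question, but the table name dbo.artikal and the function name are hypothetical, and the @podosobina part of the filter is left out for brevity.

        -- Sketch: an inline table-valued function holding the shared filter.
        CREATE FUNCTION dbo.fn_FilterByGroups
        (
            @vrsta_id     int,
            @podvrsta_id  int,
            @podgrupa2_id int
        )
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT a.id
            FROM dbo.artikal AS a
            WHERE (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
              AND (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
              AND (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
        );
        GO

        -- Each stored procedure then joins against it, so the condition is
        -- written once but can still be inlined by the optimizer:
        -- SELECT ... FROM dbo.artikal AS a
        -- JOIN dbo.fn_FilterByGroups(@vrsta_id, @podvrsta_id, @podgrupa2_id) AS f
        --      ON f.id = a.id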

    Read the article

  • Why does TeX/LaTeX not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, running LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could see the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.) Is there something in the language of TeX which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for this (because it would have been slow anyway on those large computers)? But then, on the other hand, why doesn't this annoy other people so much that they've started a fork which has some sort of caching (or transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting all the content into chapterXX.tex files and then commenting them out?

    Read the article

  • How can I pare down Vim's buffer list to only include active buffers

    - by nelstrom
    How can I pare down my buffer list to only include buffers that are currently open in a window or tab? When I've been running Vim for a long time, the list of buffers revealed by the :ls command is too large to work with. Ideally, I would like to delete all of the buffers which are not currently visible in a tab or window by running a custom command such as :Only. Can anybody suggest how to achieve this? It looks like the :bdelete command can accept a list of buffer numbers, but I'm not sure how to translate the output from :ls to a format that can be consumed by :bdelete. Any help would be appreciated.

    Clarification: let's say that in my Vim session I have opened 4 files. The :ls command outputs:

        :ls
          1  a   "abc.c"
          2  h   "123.c"
          3  h   "xyz.c"
          4  a   "abc.h"

    Buffer 1 is in the current tab and buffer 4 is in a separate tab, but buffers 2 and 3 are both hidden. I would like to run the command :Only, and it would wipe buffers 2 and 3, so the :ls command would output:

        :ls
          1  a   "abc.c"
          4  a   "abc.h"

    This example doesn't make the proposed :Only command look very useful, but if you have a list of 40 buffers it would be very welcome.
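
    By way of illustration, a small Vimscript sketch of what such an :Only command could look like (the function name is made up): it collects the buffers visible in any window of any tab and runs :bdelete on every other listed buffer.

        " Sketch of an :Only command: delete listed buffers not visible anywhere.
        " (Use 'bdelete!' instead to also discard buffers with unsaved changes.)
        function! BufOnly()
          let visible = {}
          for tabnr in range(1, tabpagenr('$'))
            for b in tabpagebuflist(tabnr)
              let visible[b] = 1
            endfor
          endfor
          for b in range(1, bufnr('$'))
            if buflisted(b) && !has_key(visible, b)
              execute 'bdelete' b
            endif
          endfor
        endfunction
        command! Only call BufOnly()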

    Read the article

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line, and disallow linebreaks within the JSON object. This way I can stream the data off disk line-by-line without having to read the entire file at once. It turns out that parsing the JSON code (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk, or decompressing it (which I do on the fly). Ideally what I'd like is a strongly-typed serialization format, where I can specify "this object field is a list of strings" (for example), and because the system knows what to expect, it can deserialize it quickly. I can also specify the format just by giving someone else its "type". It would also need to be cross-platform. I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be:

    - Strongly typed
    - Streamable (ie. read a file bit by bit without having to load it all into RAM at once)
    - Cross platform (including Java and PHP)
    - Fast
    - Free (as in speech)

    Any pointers?
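
    For concreteness, a sketch of the gzip-plus-line-per-object streaming described above; the file name is made up, and the parser is the org.json library the question links to. The read side stays constant-memory, and the JSONObject construction is the expensive step being complained about.

        import java.io.BufferedReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.zip.GZIPInputStream;
        import org.json.JSONException;
        import org.json.JSONObject;

        public class JsonLineReader {
            public static void main(String[] args) throws IOException, JSONException {
                // One JSON object per line, gzip-compressed, streamed off disk.
                try (BufferedReader in = new BufferedReader(new InputStreamReader(
                        new GZIPInputStream(new FileInputStream("objects.json.gz")),
                        "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        JSONObject obj = new JSONObject(line); // the expensive step
                        // ... use obj ...
                    }
                }
            }
        }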

    Read the article

  • TypeScript - separating code output

    - by Andrea Baccega
    I'm trying TypeScript and I find it very useful. I have a quite large project and I was considering rewriting it using TypeScript. The main problem here is the following:

    File A.ts:

        class A extends B {
            // A stuff
        }

    File B.ts:

        class B {
            // B stuff
        }

    If I compile A.ts with the command tsc --out compiledA.js A.ts, I'll get an error from the compiler because it doesn't know how to treat the "B" after extends. So a "solution" would be including in A.ts (as the first line of code):

        /// <reference path="./B.ts" />

    Compiling A.ts again with the same command, tsc --out compiledA.js A.ts, will result in compiledA.js containing both B.ts and A.ts code (which could be very nice). In my case, I only need to compile the A.ts code into the compiledA.js file, and I don't want the B.ts stuff to be in there. Indeed, what I want is:

        tsc --out A.js A.ts   = compile only the A.ts stuff
        tsc --out B.js B.ts   = compile only the B.ts stuff

    I can do it by removing the "extends" keyword, but doing that I'll lose most of the TypeScript goodness. Can someone tell me if there's a way to do this?
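
    One approach that is often suggested for exactly this situation, sketched here as an assumption rather than verified against the question's compiler version: compile B.ts once with --declaration, then have A.ts reference only the generated B.d.ts. A declaration file emits no JavaScript, so A.js contains only A's code; B.js still has to be loaded before A.js at runtime.

        // Build steps (sketch):
        //   tsc --declaration --out B.js B.ts   // emits B.js and B.d.ts
        //   tsc --out A.js A.ts                 // A.js now contains only A

        // A.ts
        /// <reference path="./B.d.ts" />
        class A extends B {
            // A stuff
        }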

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down.

    For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have their average at 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status.

    What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
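
    One candidate weighting, which is not from the question and is only a sketch: combine the per-site probabilities in log-odds space relative to the population baseline, naive-Bayes style, so sites far from the baseline dominate while near-baseline sites barely move the estimate. The baseline value and the clamping epsilon are assumptions.

        import math

        def combine(probabilities, baseline):
            """Naive-Bayes-style combination of per-site probabilities,
            measured relative to a population baseline probability."""
            eps = 1e-6  # clamp so 0.0 / 1.0 inputs stay finite
            base_odds = math.log(baseline / (1 - baseline))
            log_odds = base_odds
            for p in probabilities:
                p = min(max(p, eps), 1 - eps)
                # evidence contributed by this site, relative to the baseline
                log_odds += math.log(p / (1 - p)) - base_odds
            return 1 / (1 + math.exp(-log_odds))

        # The question's example: 50 sites at 48% female, one at 0% female
        # (100% male), baseline 49%. Prints P(female) very close to 0,
        # i.e. a guess of ~100% male rather than the straight average's 49%.
        print(combine([0.48] * 50 + [0.0], baseline=0.49))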

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid.

    - The grid is very sparse. Certain small regions are of relatively high density. Relatively few isolated nonempty cells.
    - The amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that).
    - This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process). Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
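
    For the relational-storage angle, a sketch of one common layout (all names here are hypothetical): store fixed-size chunks keyed by their chunk coordinates, so empty space costs nothing and a neighborhood fetch is a small range query.

        -- Sketch: 16x16 chunks of the sparse grid, keyed by chunk coordinates.
        CREATE TABLE grid_chunk (
            chunk_x INTEGER NOT NULL,
            chunk_y INTEGER NOT NULL,
            data    BLOB    NOT NULL,      -- serialized 16x16 block of cells
            PRIMARY KEY (chunk_x, chunk_y)
        );

        -- A player's neighborhood: every chunk overlapping a small rectangle.
        SELECT chunk_x, chunk_y, data
        FROM grid_chunk
        WHERE chunk_x BETWEEN 10 AND 12
          AND chunk_y BETWEEN -3 AND -1;

    Density queries and map previews could then be served from a companion summary table (one row per chunk with a cell count) without touching the blobs.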

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written close to once and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease and user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.

    Read the article

  • Dealing with expired session for a partially filled form?

    - by aaronls
    I have a large web form, and would like to prompt the user to log in if their session expires, or have them log in when they submit the form. It seems that having them log in when they submit the form creates a lot of challenges, because they get redirected to the login page and then the postback data for the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires: they stay on the original form page, a panel appears telling them the session has expired and they need to log in, it submits the login asynchronously, the login panel disappears, and the user is still on the original partially filled form and can submit it. Is this easily doable using the existing ASP.NET Membership controls? When they submit the form, will I need to worry about the session key? I mean, I am wondering if the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs).

    Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.

    Read the article

  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi. This is more of a design question than a specific code question; I'm sure I am missing the obvious and just need another set of eyes. I am writing a multi-client server based on WSAAsyncSelect; each connection is made into an object of a connection class I have written, which contains associated settings and buffers etc. My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established; thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again.

    This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE is received is unknown; I could be attempting to send a lot of stuff during this period, all the time adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and reduce the buffer. So my question is, how do you generally handle this situation? Note I am not talking about the network buffer itself which Winsock uses, but one of my own creation used to "queue" up sends. Hope I explained that well enough, thanks all!
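
    To make the usual answer ("cap the queue and push the problem back to the caller") concrete, a sketch under the question's WSAAsyncSelect setup; the class, method, and constant names are invented, and error handling is kept minimal.

        // Sketch: a per-connection outgoing queue with a hard size cap.
        #include <winsock2.h>
        #include <string>

        class Connection {
        public:
            explicit Connection(SOCKET s) : sock_(s) {}

            // Refuses data instead of growing without bound; the caller must
            // then throttle, drop the message, or disconnect the slow client.
            bool QueueSend(const char* data, size_t len) {
                if (pending_.size() + len > kMaxPending) return false;
                pending_.append(data, len);
                Flush();
                return true;
            }

            // Also called from the FD_WRITE notification handler.
            void Flush() {
                while (!pending_.empty()) {
                    int sent = send(sock_, pending_.data(), (int)pending_.size(), 0);
                    if (sent == SOCKET_ERROR) {
                        if (WSAGetLastError() == WSAEWOULDBLOCK) return; // wait for FD_WRITE
                        return; // real error: close the connection elsewhere
                    }
                    pending_.erase(0, sent);
                }
            }

        private:
            enum { kMaxPending = 256 * 1024 }; // hard cap per connection
            SOCKET sock_;
            std::string pending_; // accepted by QueueSend, not yet by send()
        };

    The 256 KB figure is arbitrary; the real design decision is what to do when the cap is hit, which depends on whether the protocol can tolerate dropping data or has to disconnect the slow client.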

    Read the article

  • Android application listed as compatible with Sony Xperia S but still filtered from google play

    - by mlidal
    I have published an Android application and some users are complaining that it is listed as not compatible with the Sony Xperia S. According to the developer console, the Xperia S (LT26i) is listed as compatible. Does anyone know of any reason why the app is still filtered from Google Play? I have seen people reporting problems with big APK files. This app is about 20 MB in size, with the largest file being 14 MB. Quite a bit, but not enough to cause problems, I think... Here is the output from aapt dump badging:

        package: name='no.bouvet.nrkut' versionCode='4' versionName='1.0'
        sdkVersion:'4'
        targetSdkVersion:'13'
        uses-permission:'android.permission.ACCESS_FINE_LOCATION'
        uses-permission:'android.permission.ACCESS_COARSE_LOCATION'
        uses-permission:'android.permission.ACCESS_WIFI_STATE'
        uses-permission:'android.permission.ACCESS_NETWORK_STATE'
        uses-permission:'android.permission.INTERNET'
        uses-permission:'android.permission.WRITE_EXTERNAL_STORAGE'
        application-label:'UT.no'
        application-icon-120:'res/drawable-ldpi/utno_launcher.png'
        application-icon-160:'res/drawable-mdpi/utno_launcher.png'
        application-icon-240:'res/drawable-hdpi/utno_launcher.png'
        application-icon-320:'res/drawable-xhdpi/utno_launcher.png'
        application: label='UT.no' icon='res/drawable-mdpi/utno_launcher.png'
        launchable-activity: name='no.bouvet.nrkut.MainActivity' label='UT.no' icon=''
        uses-feature:'android.hardware.location'
        uses-feature:'android.hardware.location.gps'
        uses-feature:'android.hardware.location.network'
        uses-feature:'android.hardware.wifi'
        uses-feature:'android.hardware.touchscreen'
        uses-feature:'android.hardware.screen.portrait'
        main
        other-activities
        search
        supports-screens: 'small' 'normal' 'large' 'xlarge'
        supports-any-density: 'true'
        locales: '--_--'
        densities: '120' '160' '240' '320'

    Read the article

  • Should I use MEF or Prism for my Silverlight project?

    - by Daniel
    Hi! My team (3 developers) will be building a Silverlight LOB application. This is the first Silverlight project for us; we've been doing mostly WinForms. We'll be using Silverlight 4 / VS2010 / possibly WCF RIA Services, and an ASP.NET web application to handle authentication and host the Silverlight pages. We need a way to:

    - Modularize the Silverlight project so we can work on different parts of the application, then integrate them.
    - Dynamically load different parts of the application, so the initial download size of the XAP file wouldn't be too large.

    After some research, I found out that Prism and MEF are possible solutions to these goals. Can you give me advice on which framework to use, or possibly another solution? We don't have much experience with Silverlight and the project needs to be finished in 3 months, so the learning curves for frameworks should be considered. Thank you for reading! Any input will be much appreciated.

    Read the article

  • Is XMLReader a SAX parser, a DOM parser, or neither?

    - by Renesis
    I am testing various methods to read (possibly large, and very often) XML configuration files in PHP. No writing is ever needed. I have two successful implementations, one using SimpleXML (which I know is a DOM parser) and one using XMLReader. I know that a DOM parser must read the whole tree and therefore uses more memory; my tests reflect that. I also know that a SAX parser is an "event-based" parser that uses less memory because it reads each node from the stream without checking what is next. XMLReader also reads from a stream, with the cursor providing data about the node it is currently at. So, it definitely sounds like XMLReader (http://us2.php.net/xmlreader) is not a DOM parser, but my question is: is it a SAX parser, or something else? It seems like XMLReader behaves the way a SAX parser does but does not throw the events itself (in other words, can you construct a SAX parser with XMLReader?). If it is something else, does the classification it's in have a name?
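
    For what it's worth, the PHP manual classifies XMLReader as a pull parser: like SAX it streams, but the caller drives the cursor instead of receiving event callbacks. A small sketch of that style (the file and element names are made up):

        <?php
        // Pull-parsing sketch: the loop advances the cursor one node at a time.
        $reader = new XMLReader();
        $reader->open('config.xml');

        while ($reader->read()) {
            if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'setting') {
                // Only the current node is held in memory, unlike a full DOM tree.
                echo $reader->getAttribute('name'), ' = ', $reader->readString(), "\n";
            }
        }
        $reader->close();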

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all. First, to give you some background: I have some research code which performs a Monte Carlo simulation. Essentially what happens is I iterate through a collection of objects, compute a number of vectors from their surface, then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode would look something like this:

        for each object {
            for a number of vectors {
                do some computations
                for each object {
                    check if vector intersects
                }
            }
        }

    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases I found that vector iterators were around twice as fast as arrays; however, when I implemented a vector in my code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the object function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution time gap between arrays and vectors reduces, until eventually the array was quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop run time.

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process.

    I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; resources available are 8 GB RAM, 500 GB of free disk space and a 4-CPU processor at 3 GHz.

    I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei

    PS: I have looked at the site's suggested "related questions" and found nothing relevant.
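
    A sketch of the loop a git person would probably reach for first (the tarball naming and the single top-level directory inside each tarball are assumptions): everything happens on master, one commit per tarball, so no branches and no checkouts of older versions are involved.

        #!/bin/sh
        # Sketch: import weekly tarballs as successive commits on master.
        # The glob must sort in version order (1.001 ... 1.650 does, since the
        # version numbers are fixed-width).
        set -e
        git init project
        cd project
        for tb in ../tarballs/project-*.tar.gz; do
            # Clear the working tree (except .git) so deletions are picked up too.
            find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
            tar -xzf "$tb" --strip-components=1
            git add -A
            git commit -m "Import $(basename "$tb")"
        done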

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this: there's a process that's writing to a file, and sometimes the file is rather large, say 400-500 MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there, but it might not be done being written. Plus, this needs to be done remotely: as in on the same internal LAN but not on the same computer, and typically the process that wants to know when the file writing is done is running on a Linux box, with the process that's writing the file, and the file itself, on a Windows box. No, Samba isn't an option. XML-RPC communication to a service on that Windows box is an option, as is using SNMP to check, if that's viable.

    Ideally:
    - Works on either Linux or Windows, meaning the solution is OS independent.
    - Works for any type of file.

    Good enough:
    - Works just on Windows, but can be done through some library or whatever that can be accessed with Python.
    - Works only for PDF files.

    The current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for the EOL differences, because the file may have been created on Linux or Windows.
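
    The "good enough" PDF-only idea at the end translates to only a few lines of Python; a sketch (the path is made up, and seeing %%EOF at the tail is a necessary rather than sufficient sign that the writer has finished):

        def pdf_looks_complete(path, tail_bytes=2048):
            """Return True if the PDF end-of-file marker appears near the end."""
            with open(path, "rb") as f:
                f.seek(0, 2)                    # jump to the end of the file
                size = f.tell()
                f.seek(max(0, size - tail_bytes))
                tail = f.read()
            return b"%%EOF" in tail             # byte search, so EOL style is irrelevant

        print(pdf_looks_complete(r"C:\incoming\report.pdf"))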

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well known issue. However, the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least in our test case it did). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue, other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?

    Read the article

  • Improve speed of a JOIN in MySQL

    - by ran2
    Dear all, I know there are similar threads around, but this is really the first time I realize that query speed might affect me, so it's not that easy for me to transfer other folks' problems to my case. That being said, I have been using the following query successfully with smaller data, but when I use it on mildly large tables (about 120,000 records) I am waiting for hours:

        INSERT INTO anothertable (id, someint1, someint1, somevarchar1, somevarchar1)
        SELECT DISTINCT md.id, md.someint1, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md
        JOIN table2 AS pd ON (md.id = pd.id);

    Tables 1 and 2 contain about 120,000 records each. The query has been running for almost 2 hours right now. Is this normal? Do I just have to wait? I really have no idea, but I am pretty sure that one could do it better, since it's my very first try. I read about indexing, but don't know yet what to index in my case. Thanks for any suggestions - feel free to point me to the very beginners' guides! Best, matt
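
    Before anything else, the usual suspect with a join this slow is a missing index on the join column; a sketch of checking that (the index names are made up, the tables and columns are the question's own):

        -- Make sure the join column is indexed on both sides; without an index,
        -- MySQL may end up scanning ~120,000 rows of table2 per row of table1.
        ALTER TABLE table1 ADD INDEX idx_table1_id (id);
        ALTER TABLE table2 ADD INDEX idx_table2_id (id);

        -- EXPLAIN shows whether the join now uses the indexes:
        EXPLAIN
        SELECT md.id, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md
        JOIN table2 AS pd ON (md.id = pd.id);

    The DISTINCT also typically forces a temporary table and sort, so if the join on id already guarantees unique rows, dropping it is worth testing as well.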

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.)

    I've worked in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in environments where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least it isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and into external markets.

    The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:

    - 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
    - 'What if one of our systems changes and the connector breaks?'
    - 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'

    With one developer behind these little projects, the answers are invariably:

    - 'Nobody, but...'
    - 'It will break, just like any other application would...'
    - 'I, uh...'

    How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using that 6 months to consult about what tender process we might use?

    Read the article

  • specifying multiple URLs with cURL/PHP using square brackets

    - by Raj Gundu
    I have a large array of URLs similar to this:

        $nodes = array(
            'http://www.example.com/product.php?page=1&sortOn=sellprice',
            'http://www.example.com/product.php?page=2&sortOn=sellprice',
            'http://www.example.com/product.php?page=3&sortOn=sellprice'
        );

    The cURL manual states here (http://curl.haxx.se/docs/manpage.html) that I can use square brackets '[]' to specify multiple URLs. Used in the above example this would be similar to this:

        'http://www.example.com/product.php?page=[1-3]&sortOn=sellprice'

    So far I have been unable to reference this correctly. This is the complete code segment I'm currently trying to utilize this with:

        $nodes = array(
            'http://www.example.com/product.php?page=1&sortOn=sellprice',
            'http://www.example.com/product.php?page=2&sortOn=sellprice',
            'http://www.example.com/product.php?page=3&sortOn=sellprice'
        );
        $node_count = count($nodes);

        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < $node_count; $i++) {
            $url = $nodes[$i];
            $curl_arr[$i] = curl_init($url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        echo "results: ";
        for ($i = 0; $i < $node_count; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            echo($i . "\n" . $results . "\n");
        }
        echo 'done';

    I can't seem to find any more documentation on this. Thanks in advance.
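
    One thing worth knowing here: the [1-3] globbing described on that manual page is a feature of the curl command-line tool, not of libcurl, so PHP's curl_init() will not expand it. A sketch of doing the expansion on the PHP side instead, which feeds the same multi-handle loop:

        <?php
        // Sketch: build the page range in PHP rather than relying on URL globbing.
        $nodes = array();
        foreach (range(1, 3) as $page) {
            $nodes[] = "http://www.example.com/product.php?page={$page}&sortOn=sellprice";
        }
        print_r($nodes);   // the same three URLs as the hand-written array above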

    Read the article

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day; these are all text files and images. The app uses a database to reference these files. The app is supposed to clear up these files after a little use (perhaps after a few days), but this process may or may not be working. That is not the subject of this question.

    Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory, a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner indexing it. Today, I am seeing an error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the SD card, I see it has plenty of storage left, but counting:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to try and reproduce the error on a command line; I also deleted several hundred files rather than a handful. However, my questions are: are there hard limits on file size or the number of files in a directory? Am I even on the right track here?

    Nota bene: the SD card is as-is, i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has hard limits of a 2 GB file size (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.

    Read the article

  • [LaTeX] breaking an environment across pages - the smart way

    - by Flavius
    Hi, I am using the exercise package to display exercises in a book. I have redefined some commands like this, which basically adds some space, a pencil, and two \hrule's before and after the exercise:

        \renewcommand{\ExerciseHeader}{\vskip 1em\hrule\vskip 1em\centerline{\textbf{\large\smallpencil \ExerciseHeaderNB\ExerciseHeaderTitle%
          \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter\def\endExerciseEnv{\termineliste{1}\@EndExeBox\vskip .5em\hrule\vskip 1em}\makeatother

    Now this works, but there's a small problem: there are situations where only the \hrule ends up at the bottom of a page, and the rest of the exercise goes on the next page. There is also the opposite behavior: the entire exercise is on one page, except the \hrule in \endExerciseEnv, which is flushed to the next page.

    My question is: how do I force the top \hrule to come either together with the header of the exercise (caption, title, whatever) and at least the first two paragraphs (or \ExePath and a paragraph, or anything like that, but there must be at least "two things", so it doesn't look ugly), OR be flushed altogether, with the entire exercise? A similar question for the bottom \hrule: how do I force it to have at least two items in front of it on the visible page where the \hrule itself goes?

    Any LaTeX guru who knows that?

    Addendum: I have asked LaTeX questions like this in the past and I've got answers which required me to do stuff manually, like "insert a \vskip here and there" or such. Let me be clear: this is a book, there are lots of exercises, and I NEED it to be done "automatically", by going the proper way of redeclaring commands & co.
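
    For the top rule at least, one fix that is often suggested (a sketch only, assuming the question's own preamble plus the needspace package; it does not address the bottom rule): require a few lines of room before drawing the rule, and forbid a page break immediately after it.

        % Sketch: keep the opening rule attached to the exercise header.
        % Assumes \smallpencil, \ExerciseHeaderNB, etc. from the question's setup.
        \usepackage{needspace}

        \renewcommand{\ExerciseHeader}{%
          \Needspace*{4\baselineskip}% flush to the next page if <4 lines remain
          \vskip 1em\hrule\nobreak\vskip 1em\centerline{\textbf{\large\smallpencil \ExerciseHeaderNB\ExerciseHeaderTitle%
          \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}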

    Read the article

< Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >