Search Results

Search found 883 results on 36 pages for 'subset'.

Page 13/36 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Cuboid inside generic polyhedron

    - by DOFHandler
    I am searching for an efficient algorithm to determine whether a cuboid is completely inside, completely outside, or neither (straddling the boundary of) a generic (concave or convex) polyhedron. The polyhedron is defined by a list of 3D points and a list of facets. Each facet is defined by a subset of the contour points, ordered so that the right-hand-rule normal points outward from the solid. Any suggestion? Thank you
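
    One common approach, sketched below under two assumptions: the facets have been triangulated (fan-triangulating each ordered contour works for convex facets), and edge-grazing rays are ignored. Classify each of the cuboid's eight corners with a ray-casting point-in-polyhedron test (an odd number of ray/facet crossings means inside); all corners inside suggests "inside", none suggests "outside", and a mix means the cuboid straddles the boundary. A complete solution would also test the cuboid's faces against the facets, since a facet can pierce the box between corners. All names here are hypothetical:

        import numpy as np

        def ray_hits_triangle(origin, direction, tri, eps=1e-9):
            # Moeller-Trumbore ray/triangle intersection test
            v0, v1, v2 = (np.asarray(p, dtype=float) for p in tri)
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:                # ray parallel to the triangle's plane
                return False
            inv = 1.0 / det
            s = origin - v0
            u = np.dot(s, p) * inv
            if u < 0.0 or u > 1.0:
                return False
            q = np.cross(s, e1)
            v = np.dot(direction, q) * inv
            if v < 0.0 or u + v > 1.0:
                return False
            return np.dot(e2, q) * inv > eps  # hit strictly in front of the origin

        def point_in_polyhedron(point, triangles):
            # parity of crossings along an arbitrary ray; odd = inside
            direction = np.array([0.57735, 0.57735, 0.57735])  # oblique, to dodge axis-aligned degeneracies
            hits = sum(ray_hits_triangle(np.asarray(point, dtype=float), direction, t)
                       for t in triangles)
            return hits % 2 == 1

        def classify_cuboid(corners, triangles):
            inside = [point_in_polyhedron(c, triangles) for c in corners]
            if all(inside):
                return "inside"     # subject to the facet/box intersection caveat above
            if not any(inside):
                return "outside"    # same caveat
            return "straddling"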

    Read the article

  • How to read expected child nodes of a given node from schema in PHP?

    - by MartyIX
    I was wondering if there's an implementation of an XML Schema reader that, for an arbitrary node in the schema, provides the list of nodes expected as children of that node, the restrictions on those nodes, and so on. I'm planning to program it for my purposes, but I would like to know whether it is already solved somewhere. I really need only a small subset of what I described above. Thanks for tips!
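
    For the small subset described, one workable route is to treat the schema itself as an ordinary XML document and walk it. A minimal sketch (in Python rather than PHP, handling only inline complexType declarations; the expected_children helper and the file name are made up):

        import xml.etree.ElementTree as ET

        XS = "{http://www.w3.org/2001/XMLSchema}"

        def expected_children(xsd_path, element_name):
            # Find the named element, then the sequence/choice/all group
            # inside its inline complexType, and report its child elements.
            root = ET.parse(xsd_path).getroot()
            for el in root.iter(XS + "element"):
                if el.get("name") == element_name:
                    for node in el.iter():
                        if node.tag in (XS + "sequence", XS + "choice", XS + "all"):
                            return [(c.get("name") or c.get("ref"),
                                     c.get("minOccurs", "1"),
                                     c.get("maxOccurs", "1"))
                                    for c in node.findall(XS + "element")]
            return []

        # e.g. expected_children("order.xsd", "order") might return
        # [("customer", "1", "1"), ("item", "1", "unbounded")]

    Named types, element references and imported schemas would need extra resolution steps on top of this.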

    Read the article

  • An Alternative to Views?

    - by Abs
    Hello all, I am just reading this article and I came across this: "Filter: Remove any functions in the WHERE clause, don't include views in your Transact-SQL code, may need additional indexes." If I do not use views, what are the alternatives? In my situation, I want to select some data from a table and then run a few further SELECT queries against the subset of data returned by the first query. How can I do this efficiently? Thanks all
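
    The usual alternative is to inline what the view would have done as a derived table (or a temporary table, if several later queries reuse the subset), so the optimizer sees a single statement. A hedged sketch, using sqlite3 purely to make the demo runnable; the table and columns are invented:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
        con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                        [(1, "east", 10.0), (2, "west", 25.0), (3, "east", 40.0)])

        # The inner SELECT plays the role of the view, inlined as a derived table;
        # the outer query then works on that subset.
        rows = con.execute("""
            SELECT region, SUM(total)
            FROM (SELECT region, total FROM orders WHERE total > 15) AS big_orders
            GROUP BY region
        """).fetchall()
        print(rows)   # e.g. [('east', 40.0), ('west', 25.0)]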

    Read the article

  • What is wrong in this simple Makefile

    - by Walidix
        SRC_VAR = test string for variable manipulation.
        TEST1_VAR = $(subset for,foo,${SRC_VAR})

        all:
                @echo original str: ${SRC_VAR}
                @echo substitution: ${TEST1_VAR}

    This is the output:

        original str: test string for variable manipulation.
        substitution:

    The output should be:

        original str: test string for variable manipulation.
        substitution: test string foo variable manipulation.
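
    For reference, the likely culprit: GNU Make has no function named subset; the built-in substitution function is $(subst from,to,text), and a $(...) reference that names neither a known function nor a defined variable silently expands to nothing, which is why the substitution line comes out empty. The corrected assignment would be TEST1_VAR = $(subst for,foo,${SRC_VAR}).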

    Read the article

  • Haskell Add Function Return to List Until Certain Length

    - by kienjakenobi
    I want to write a function which takes a list and constructs a subset of that list of a certain length, based on the output of a function. If I were simply interested in the first 50 elements of the sorted list xs, I would use fst (splitAt 50 (sort xs)). The problem, however, is that elements in my list rely on other elements in the same list: if I choose element p, then I MUST also choose elements q and r, even if they are not in the first 50 elements of my list. I am using a function finderFunc which takes an element a from the list xs and returns a list with the element a and all of its required elements. finderFunc works fine. Now, the challenge is to write a function which builds a list whose total length is 50 based on multiple outputs of finderFunc. Here is my attempt (the Ord constraint is needed for sort):

        finish :: (Ord a) => [a] -> [a] -> [a]
        -- This is the base case, which adds nothing to the final list
        finish [] fs = []
        -- The function is recursive, so the fs variable is necessary so that
        -- finish can forward the incomplete list to itself.
        finish ps fs
            -- If the final list fs is too small, add elements to it
            | length fs < 50 && length (fs ++ newrs) <= 50 = fs ++ finish newps newrs
            -- If the length is met, then add nothing to the list and quit
            | length fs >= 50 = finish [] fs
            -- These guard statements are currently lacking, not the main problem
            | otherwise = finish [] fs
            where
              -- Sort the candidate list
              sortedps = sort ps
              -- (finderFunc a) returns a list of type [a] containing a and all
              -- the elements which are required to go with it. This is the
              -- interesting bit. rs is also a subset of the candidate list ps.
              rs = finderFunc (head sortedps)
              -- Remove those elements which are already in the final list,
              -- because there can be overlap
              newrs = filter (`notElem` fs) rs
              -- Remove the elements we will add to the list from the new list
              -- of candidates
              newps = filter (`notElem` rs) ps

    I realize that the above guards will, in some cases, not give me a list of exactly 50 elements. That is not the main problem right now. The problem is that finish does not work at all as I would expect it to: not only does it produce duplicate elements in the output list, it sometimes goes far above the total number of elements I want to have in the list. The way this is written, I usually call it with an empty list, as in finish xs [], so that the list it builds on starts as an empty list.
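
    A hedged diagnosis: because the result is assembled as fs ++ finish newps newrs, each recursive call deduplicates only against its own fs argument (the newrs just passed down), never against elements emitted by earlier calls, so a later bundle can re-add them and the total can overshoot. Threading a single growing accumulator fixes both symptoms; in Haskell that would mean recursing with finish newps (fs ++ newrs) and returning fs once the target is reached. The same control flow as a short Python sketch, with finder_func standing in for finderFunc:

        def finish(candidates, finder_func, target=50):
            chosen = []
            pool = sorted(candidates)
            while pool and len(chosen) < target:
                bundle = finder_func(pool[0])    # pool[0] plus its required elements
                new = [x for x in bundle if x not in chosen]  # dedup against ALL chosen so far
                if len(chosen) + len(new) > target:
                    break          # mirrors the original guard; could instead skip this bundle
                chosen.extend(new)
                # pool always shrinks because bundle contains pool[0] itself
                pool = [p for p in pool if p not in bundle]
            return chosen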

    Read the article

  • mySQL: Can I make count() honor limit clause?

    - by Stomped
    I'm trying to get a count of records matching certain criteria within a subset of the total records. I tried (and assumed this would work):

        SELECT count(*) FROM records WHERE status = 'ADP' LIMIT 0,10

    I assumed this would tell me how many records with status ADP were in that set of 10 records. It doesn't: it returns 30, which in this case is the total number of ADP records in the table. How do I count only the records matching my criteria within the limit?
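
    The reason: LIMIT trims the rows a query returns, and an ungrouped COUNT(*) returns exactly one row, so there is nothing for LIMIT 0,10 to trim. The standard fix is to apply the LIMIT inside a subquery and count outside it; note that "the first 10 records" is only well defined with an ORDER BY. A runnable sketch (sqlite3 used for the demo, since it accepts the same LIMIT offset,count syntax):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, status TEXT)")
        con.executemany("INSERT INTO records (status) VALUES (?)",
                        [("ADP",), ("XYZ",), ("ADP",), ("ADP",), ("XYZ",)] * 6)

        # Count ADP rows within the first 10 rows only, not in the whole table.
        (n,) = con.execute("""
            SELECT COUNT(*)
            FROM (SELECT status FROM records ORDER BY id LIMIT 0, 10) AS first_ten
            WHERE status = 'ADP'
        """).fetchone()
        print(n)   # 6: the ADP rows among the first ten records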

    Read the article

  • Setting custom SQL in django admin

    - by eugene y
    I'm trying to set up a proxy model in the django admin. It will represent a subset of the original model. The code from models.py:

        class MyManager(models.Manager):
            def get_query_set(self):
                return super(MyManager, self).get_query_set().filter(some_column='value')

        class MyModel(OrigModel):
            objects = MyManager()

            class Meta:
                proxy = True

    Now, instead of filter() I need to use a complex SELECT statement with JOINs. What's the proper way to inject it wholly into the custom manager?
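
    One hedged option that still hands the admin a real QuerySet (Manager.raw() returns a RawQuerySet, which the admin changelist can't paginate or filter) is QuerySet.extra(), which grafts extra tables and raw WHERE fragments onto the SQL the ORM generates. The table and column names below are invented:

        class MyManager(models.Manager):
            def get_query_set(self):
                qs = super(MyManager, self).get_query_set()
                # extra() appends 'other_table' to the FROM clause and ANDs the
                # raw conditions into WHERE; %s placeholders come from params.
                return qs.extra(
                    tables=['other_table'],
                    where=['other_table.origmodel_id = myapp_origmodel.id',
                           'other_table.some_column = %s'],
                    params=['value'],
                )

    If the query is too complex even for extra(), a database view wrapped in an unmanaged model is another common workaround.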

    Read the article

  • Is there a standard place to store Spring library jar files?

    - by richj
    I've downloaded Spring 3.0.2 with dependencies and found that it contains 405 jar files. I usually keep third party libraries in a "lib" subdirectory, but there are so many Spring jars that it seems sensible to keep them separately so that they don't swamp the other libraries and to simplify version upgrades. I suspect that I want to keep the full set of libraries in Subversion, but only deploy the subset that is actually used. Do Spring users have a standard way to deal with this problem?

    Read the article

  • How to use an adjacency matrix to determine which rows to 'pass' to a function in r?

    - by dubhousing
    New to R, and I have a long-ish question:

    I have a shapefile/map, and I'm aiming to calculate a certain index for every polygon in that map, based on attributes of that polygon and of each polygon that neighbors it. I have an adjacency matrix (which I think is the same thing as a "1st-order queen contiguity weights matrix", although I'm not sure) that describes which polygons border which other polygons, e.g.:

        POLYID A B C D E
        A      0 0 1 0 1
        B      0 0 1 0 0
        C      1 1 0 1 0
        D      0 0 1 0 1
        E      1 0 0 1 0

    The above indicates, for instance, that polygons C and E adjoin polygon A; polygon B adjoins only polygon C; and so on. The attribute table I have has one polygon per row:

        POLYID TOT L10K 10_15K 15_20K ...
        A      500 24   30     77     ...

    where TOT, L10K, etc. are the variables I use to calculate the index. There are 525 polygons/rows in my data, so I'd like to use the adjacency matrix to determine which rows' attributes to incorporate into the calculation of the index of interest.

    For now, I can calculate the index when I subset by hand the rows that correspond to one 'bundle' of neighboring polygons and then use a loop (if it's of interest, I'm calculating the Centile Gap Index, a measure of local income segregation). E.g., subsetting the 'neighborhood' of the Detroit City Schools:

        Detroit <- UNSD00[c(142,150,164,221,226,236,295,327,157,177,178,364,233,373,418,424,449,451,487),]

    Then I record the marginal column proportions and a running total (columns 4:19 are the necessary ones in the attribute table):

        catprops <- vector()
        for (i in 4:19) {
          catprops[(i-3)] <- sum(Detroit[,i]) / sum(Detroit[,3])
        }
        catprops <- as.data.frame(catprops)
        catprops[,2] <- cumsum(catprops[,1])

    Then I use the following code to calculate the index; note that the outer loop has i in 1:19 because the Detroit subset has 19 polygons:

        cgidistsum <- 0
        for (i in 1:19) {
          pranks <- vector()
          for (j in 4:19) {
            if (Detroit[i,j] == 0)
              pranks <- append(pranks, 0)
            else if (j == 4)
              pranks <- append(pranks, seq(0, catprops[1,2], by=catprops[1,2]/Detroit[i,j]))
            else
              pranks <- append(pranks, seq(catprops[j-4,2], catprops[j-3,2], by=catprops[j-3,1]/Detroit[i,j]))
          }
          distpranks <- abs(pranks - median(pranks))
          cgidistsum <- cgidistsum + sum(distpranks)
        }
        cgi <- (.25 - (cgidistsum / sum(Detroit[,3]))) / .25

    My apologies if I've provided more information than is necessary. I would really like to exploit the adjacency matrix in order to calculate the CGI for each 'bundle' of these rows. If you happen to know how I could get started with this, that would be great. And my apologies for any novice mistakes; I'm new to R!
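
    A hedged sketch of the piece being asked about: each row of the adjacency matrix already encodes one 'bundle', so iterating over rows and taking the nonzero columns yields exactly the row indices to subset before running the CGI code. Shown in Python/NumPy for compactness (the analogous R selection would be which(adj[i,] == 1)); attrs is a hypothetical stand-in for the attribute table:

        import numpy as np

        # adjacency matrix: adj[i, j] == 1 when polygon j borders polygon i
        adj = np.array([[0, 0, 1, 0, 1],
                        [0, 0, 1, 0, 0],
                        [1, 1, 0, 1, 0],
                        [0, 0, 1, 0, 1],
                        [1, 0, 0, 1, 0]])

        def bundles(adj):
            # for each polygon, yield the row indices of itself plus its neighbors
            for i in range(adj.shape[0]):
                neighbors = np.flatnonzero(adj[i])
                yield i, np.concatenate(([i], neighbors))

        for i, rows in bundles(adj):
            print(i, rows)   # 'rows' would index the attribute table, e.g. attrs[rows]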

    Read the article

  • C++ book for a c# developer

    - by Eldila
    I am a C# developer who finds himself having to relearn C++. The last time I programmed in C++ was in school, and I am looking for good books as a refresher. I want something that assumes previous programming exposure and gets straight to the point. Is there a book similar to K&R for C++? I know the language is bloated, so a book that covers a subset of C++ would be ideal.

    Read the article

  • R equivalent of SELECT DISTINCT on two or more fields/variables

    - by wahalulu
    Say I have a dataframe df with two or more columns. Is there an easy way to use unique() or some other R function to create a subset of the unique combinations of two or more columns? I know I can use sqldf() and write an easy "SELECT DISTINCT var1, var2, ... varN" query, but I am looking for an R way of doing this. It occurred to me to try ftable coerced to a dataframe and use the field names, but that also gives me the cross-tabulated combinations that don't exist in the dataset:

        uniques <- as.data.frame(ftable(df$var1, df$var2))
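
    A hedged pointer, for what it's worth: unique() already operates row-wise on data frames, so unique(df[, c("var1", "var2")]) returns just the distinct combinations that actually occur in the data, without the empty cells that ftable's cross-tabulation introduces.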

    Read the article

  • How to display two QuerySets in one table?

    - by Mark
    I have two different QuerySets which both return a list of Users (with different fields). How can I display them both in one HTML table? One QuerySet will always be a subset of the other; I just want to fill in the missing data with 0s. How can I do this?
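
    One hedged approach: merge the two querysets in the view into a single list of rows, defaulting the missing values to 0, and hand that list to the template for a plain {% for %} loop. The 'score' field below is a made-up example of data present only on the subset:

        def merged_rows(all_users, subset_users):
            # index the subset by primary key so membership checks are O(1)
            by_pk = dict((u.pk, u) for u in subset_users)
            rows = []
            for user in all_users:
                match = by_pk.get(user.pk)
                rows.append({"user": user,
                             "score": match.score if match else 0})
            return rows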

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    • Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    • Only pick the columns that you need; ignore everything else.
    • Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column, which is a big waste if the actual values are significantly shorter than 20 characters.
    • Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    • Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    • There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting each such lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    • Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together: the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    • Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately, then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the region that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    • Taking the previous data-partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    • Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    • Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

    Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005, I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

    Read the article

  • Where can I find a useful Unicode fallback font for Mac OS X?

    - by Stephen Jennings
    On every browser I've tried (Firefox, Safari, Chrome, and Omniweb), when I go to a web page containing somewhat less-common characters, I can't see the glyphs. For example, on the Wikipedia page for the Bengali Language, the very first line contains a string of squares; on Windows, I can see the Bengali writing. On Windows, as long as I have the Arial Unicode MS font installed, these characters fall back to that font and display properly. Mac OS X doesn't seem to ship with a font containing these Unicode characters (it has Arial Unicode MS, but it must be a subset of the Windows version because Bengali doesn't display in that font). I checked on my Snow Leopard DVD and I installed "Additional Fonts" from the Optional Installs package, but I'm still missing many languages. Is there any good, free font that contains a large collection of languages? I know creating fonts is difficult and time-consuming, but it seems like including at least one font like this with operating systems should be standard by now.

    Read the article

  • Sync a specific folder of contacts to iPhone

    - by colemanm
    Is there a simple way to organize contacts in Outlook/Entourage and only have a subset of them synchronize with the iPhone over Exchange ActiveSync? Our CEO has thousands of contacts in his mailbox, but would prefer if only a small portion of them synched to his phone over the air... The iPhone's performance takes a huge hit keeping that massive dataset in order. If he could put some of the records in subfolders or something and only sync the top level, I think that would work for him. Does anyone know if this is possible?

    Read the article

  • Problems accessing FileZilla FTP Server from static IP

    - by Kurru
    I'm trying to connect to my FTP server from my external IP address on Comcast Business. On the gateway I've set up port forwarding on ports 20-21 to my server. Additionally I've forwarded ports 7000-8000 to my server for use in passive mode. In the FileZilla Server application I've set up passive mode to use my static IP and to use the subset of ports listed above. Unfortunately, it doesn't work through the external static IP for some reason, though I can connect internally. When I try to connect through the static IP, the FileZilla monitor says:

        Connected, sending welcome message...
        220 FileZilla Server version 0.9.37 beta
        could not send reply, disconnected

    My firewall doesn't register any block events and Windows Firewall is disabled. What am I doing wrong or missing?

    Read the article

  • Linux Transparent Bridge for Network

    - by Blackninja543
    I am attempting to set up a semi-transparent bridge. I say semi because I want it to act as a transparent tap for all traffic moving through both sides of the bridge, while also exposing, to the "green zone" only, a web interface on the bridge that displays the results of the IDS and other network monitoring tools. My layout would be:

        eth0 <--> bridge(br0) <--> eth1

    The entire network would be on the same subnet, and anything passing from eth0 to eth1 would be accepted. The only time anything should be dropped is if eth0 attempts to access br0 itself; if someone accesses the web interface on br0 through eth1, it should succeed. My biggest fear is that if I attempt to block anything from eth0 to br0, I will drop the bridge altogether.

    Read the article
