Search Results

Search found 57986 results on 2320 pages for 'breadth first search'.


  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.) But why exactly is this the case? Is it arbitrary or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on youtube multiples of 360. This is of course just arbitrary, there is no real reason this is the case. The reason is that resolution here is not the limiting factor for Youtube videos – bandwidth is. Youtube has to re-encode every video that is uploaded a couple of times, and tries to use as little re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates, they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440P is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that. 
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Regex Searching in Emacs

    - by Inaimathi
    I'm trying to write some Elisp code to format a bunch of legacy files. The idea is that if a file contains a section like "<meta name=\"keywords\" content=\"\\(.*?\\)\" />", then I want to insert a section that contains the existing keywords. If that section is not found, I want to insert my own default keywords into the same section. I've got the following function: (defun get-keywords () (re-search-forward "<meta name=\"keywords\" content=\"\\(.*?\\)\" />") (goto-char 0) ;The section I'm inserting will be at the beginning of the file (or (match-string 1) "Rubber duckies and cute ponies")) ;;or whatever the default keywords are When the function fails to find its target, it signals Search failed: "[regex here]" and prevents the rest of the evaluation. Is there a way to have it return the default string, and ignore the error?
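    A minimal sketch of one way to avoid that error, keeping the regex and default string from the question: passing a non-nil NOERROR argument makes re-search-forward return nil instead of signaling, so the fallback can take over. Wrapping the call in condition-case would also work, but the NOERROR argument is the simpler route.

    ```elisp
    (defun get-keywords ()
      "Return the keywords from the meta tag, or a default string."
      (save-excursion
        (goto-char (point-min))
        ;; nil = no search bound, t = return nil instead of signaling on failure
        (if (re-search-forward
             "<meta name=\"keywords\" content=\"\\(.*?\\)\" />" nil t)
            (match-string 1)
          "Rubber duckies and cute ponies")))
    ```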

    Read the article

  • MySQL: Copy a field to another table

    - by harpax
    I have a table posts that could look like this:
    id | title | body | created | ..
    -------------------------------------------
    I would like to use the boolean search feature that is offered by a MyISAM table, but the posts table is InnoDB. So I created another table 'post_contents' that looks like this:
    post_id | body
    --------------------
    That table is already filled with some contents and I can use the boolean search. However, I need to move the title field into the post_contents table as well and then copy the existing title data to the new field. I know about the INSERT .. SELECT syntax, but I don't seem to be able to create the correct query.
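    A minimal sketch under the schema shown above (adjust names and types to the real tables): since post_contents already holds rows, adding the column and copying the titles across with a multi-table UPDATE avoids the duplicate rows an INSERT .. SELECT would create here. The index name is an assumption.

    ```sql
    -- Add the new column, then pull the titles over from posts.
    ALTER TABLE post_contents ADD COLUMN title VARCHAR(255) NOT NULL DEFAULT '';

    UPDATE post_contents AS pc
    JOIN posts AS p ON p.id = pc.post_id
    SET pc.title = p.title;

    -- Extend the FULLTEXT index so boolean searches cover the title as well.
    ALTER TABLE post_contents ADD FULLTEXT ft_title_body (title, body);
    ```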

    Read the article

  • Can MySQL filter by date if the date is stored as text? e.g. "02/10/1984"

    - by Roeland
    Hello! I am trying to modify an app for a client which already has a database of over 1000 items. The dates are stored as text in the database with the format "02/10/1984". The system allows you to add and remove fields to the catalog dynamically, and it also allows specific fields to be enabled for the advanced search. The problem is that it wasn't designed with dates in mind, so when I set a field as a date and try to search by a range, the query ends up doing an AND (cfv0.value >= 01/02/2004 AND cfv0.value <= 05/03/2008). I can make it so the date range passed is a numeric time value. Is there a way that, when sending the query, it takes the text fields (with the date) and converts them to a numeric time value, so at that point I am basically just comparing numbers, which would work fine? I do not have the option to change all the current dates to numeric values due to the way the dynamic fields are set up. Thanks guys!
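    A minimal sketch of the conversion idea, assuming the cfv0.value alias from the generated query and that "02/10/1984" is month-first (swap the format string to '%d/%m/%Y' if the data is day-first). The table and column names custom_field_values and item_id are placeholders, and @range_start / @range_end stand in for the numeric time values the search already passes. STR_TO_DATE parses the stored text at query time, and UNIX_TIMESTAMP turns the result into a plain number.

    ```sql
    -- Compare as numbers: text -> DATE -> Unix timestamp, then a numeric range check.
    SELECT item_id
    FROM custom_field_values AS cfv0          -- hypothetical name for the values table
    WHERE UNIX_TIMESTAMP(STR_TO_DATE(cfv0.value, '%m/%d/%Y'))
          BETWEEN @range_start AND @range_end;
    ```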

    Read the article

  • What is the full "for" loop syntax in C (and in other languages, in case they are compatible)?

    - by fmsf
    I have seen some very weird for loops when reading other people's code. I have been trying to search for a full syntax explanation for the for loop in C but it is very hard because the word "for" appears in unrelated sentences making the search almost impossible to Google effectively. This question came to my mind after reading this thread which made me curious again. The for here: for(p=0;p+=(a&1)*b,a!=1;a>>=1,b<<=1); In the middle condition there is a comma separating the two pieces of code, what does this comma do? The comma on the right side I understand as it makes both a>>=1 and b<<=1. But within a loop exit condition, what happens? Does it exit when p==0, when a==1 or when both happen? It would be great if anyone could help me understand this and maybe point me in the direction of a full for loop syntax description.
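    Here is a small, self-contained sketch (not the original poster's program) showing what that comma does: in "expr1, expr2" the left operand is evaluated and its value discarded, and the value of the whole expression, and therefore the loop's continue/exit decision, comes from the right operand alone. So the loop below keeps going while a != 1, and the p += ... part is just a side effect that runs on every test, including the final failing one.

    ```c
    #include <stdio.h>

    int main(void)
    {
        unsigned a = 6, b = 7, p;

        /* Same shape as the loop in the question. On every pass the condition
         * first runs "p += (a & 1) * b" (value discarded), then evaluates
         * "a != 1", and only that test decides whether to continue. */
        for (p = 0; p += (a & 1) * b, a != 1; a >>= 1, b <<= 1)
            ;   /* empty body: all the work happens in the for header */

        printf("p = %u\n", p);   /* prints p = 42 */
        return 0;
    }
    ```

    With a = 6 and b = 7 it prints p = 42: the header alone performs shift-and-add multiplication, which is why the body is empty.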

    Read the article

  • Call an ASP.NET web service from PHP

    - by SamB09
    Hi im calling an aps.net web service from a php service. The service searches both databases with a search parameter. Im not sure how to pass a search parameter to the asp.net service. Code is below. ( there is no search paramter currently but im just interested in how it would be passed to the asp.net service) $link = mysql_connect($host, $user, $passwd); mysql_select_db($dbName); $query = 'SELECT firstname, surname, phone, location FROM staff ORDER BY surname'; $result = mysql_query($query,$link); // if there is a result if ( mysql_num_rows($result) > 0 ) { // set up a DOM object $xmlDom1 = new DOMDocument(); $xmlDom1->appendChild($xmlDom1->createElement('directory')); $xmlRoot = $xmlDom1->documentElement; // loop over the rows in the result while ( $row = mysql_fetch_row($result) ) { $xmlPerson = $xmlDom1->createElement('staff'); $xmlFname = $xmlDom1->createElement('fname'); $xmlText = $xmlDom1->createTextNode($row[0]); $xmlFname->appendChild($xmlText); $xmlPerson->appendChild($xmlFname); $xmlSname = $xmlDom1->createElement('sname'); $xmlText = $xmlDom1->createTextNode($row[1]); $xmlSname->appendChild($xmlText); $xmlPerson->appendChild($xmlSname); $xmlTel = $xmlDom1->createElement('phone'); $xmlText = $xmlDom1->createTextNode($row[2]); $xmlTel->appendChild($xmlText); $xmlPerson->appendChild($xmlTel); $xmlLoc = $xmlDom1->createElement('loc'); $xmlText = $xmlDom1->createTextNode($row[3]); $xmlLoc->appendChild($xmlText); $xmlPerson->appendChild($xmlLoc); $xmlRoot->appendChild($xmlPerson); } } // // instance a SOAP client to the dotnet web service and read it into a DOM object // (this really should have an exception handler) // $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL'); $xmlString = $client->getDirectoryDom()->getDirectoryDomResult->any; $xmlDom2 = new DOMDocument(); $xmlDom2->loadXML($xmlString); // merge the second DOM object into the first foreach ( $xmlDom2->documentElement->childNodes as $staffNode ) { $xmlPerson = $xmlDom1->createElement($staffNode->nodeName); foreach ( $staffNode->childNodes as $xmlNode ) { $xmlElement = $xmlDom1->createElement($xmlNode->nodeName); $xmlText = $xmlDom1->createTextNode($xmlNode->nodeValue); $xmlElement->appendChild($xmlText); $xmlPerson->appendChild($xmlElement); } $xmlRoot->appendChild($xmlPerson); } // return result echo $xmlDom
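    A minimal sketch of the missing piece, using a hypothetical SearchDirectory operation with a "name" parameter; the real operation and parameter names have to come from the service's WSDL (the ?WSDL URL above). With PHP's SoapClient, arguments are passed as an associative array whose keys match the parameter names declared in the WSDL.

    ```php
    <?php
    // Hypothetical operation and parameter name; replace both with whatever
    // the WSDL actually declares.
    $searchTerm = 'Smith';
    $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL');
    $response  = $client->SearchDirectory(array('name' => $searchTerm));
    $xmlString = $response->SearchDirectoryResult->any;   // same ->any unwrapping as above
    ```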

    Read the article

  • Searching between dates in MySQL in this format 03/17/10.11:22:45

    - by Kelso
    I have a script that automatically populates a MySQL database with data every hour. It populates the date field like 03/17/10.12:34:11 and so on. I'm working on pulling data based on 1 day at a time from a search script. If I use select * from call_logs where call_initiated between '03/17/10.12:00:00' and '03/17/10.13:00:00' it works, but when I try to add the rest of the search params, it ignores the call_initiated field. select * from call_logs where caller_dn='2x9xxx0000' OR called_dn='2x9xxx0000' AND call_initiated between '03/17/10.12:00:00' and '03/17/10.13:00:00' ^-- I x'd out a couple of the numbers. I've also tried without the between function, and used >= and <= to pull the records, but have the same results. I'm sure it's an oversight, thanks in advance.
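    This looks like an operator-precedence issue rather than a date problem: AND binds more tightly than OR, so the range only constrains the called_dn branch. A minimal sketch with the grouping made explicit (same column names as above, numbers left x'd out):

    ```sql
    SELECT *
    FROM call_logs
    WHERE (caller_dn = '2x9xxx0000' OR called_dn = '2x9xxx0000')
      AND call_initiated BETWEEN '03/17/10.12:00:00' AND '03/17/10.13:00:00';
    ```

    Note that with dates stored as text in this format, BETWEEN is really a string comparison; it happens to work within a single day like this, but storing a real DATETIME would make range queries reliable across months and years.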

    Read the article

  • RESTful Design: Paging Collections

    - by Koen Bok
    I am designing a REST API that needs paging (per x) enforced from the server side. What would be the right way to page through any collection of resources?
    Option 1:
    GET /resource/page/<pagenr>
    GET /resource/tags/<tag1>,<tag2>/page/<pagenr>
    GET /resource/search/<query>/page/<pagenr>
    Option 2:
    GET /resource/?page=<pagenr>
    GET /resource/tags/<tag1>,<tag2>?page=<pagenr>
    GET /resource/search/<query>?page=<pagenr>
    If 1, what should I do with GET /resource? Redirect to /resource/page/0, reply with some error, or reply with the exact same as /resource/page/0 without redirecting?

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one. Tiling the extent We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three- dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example: extent<1> e(20); // 20 units in a single dimension with indices from 0-19 grid<1> g(e);      // same as extent tiled_grid<4> tg = g.tile<4>(); …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h). Let's move on swiftly to another example, in pictures, this time 2-dimensional: So we start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48), remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid. Tiled parallel_for_each and tiled_index It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads, also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object, can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things: array_view<int, 2> data(8, 6, p_my_data); parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d) { /* todo */ }); Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variables exposes, when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows: t_idx.global       = index<2> (6,3) t_idx.local          = index<2> (0,1) t_idx.tile_origin = index<2> (6,2) t_idx.tile             = index<2> (3,1) Don't move on until you are comfortable with this… the picture really helps, so use it. 
Tiled Matrix Multiplication Example – part 1 Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?) 01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M, N, vC); 07: parallel_for_each(c.grid, 08: [=](index<2> idx) restrict(direct3d) { 09: 10: int row = idx[0]; int col = idx[1]; 11: float sum = 0.0f; 12: for(int i = 0; i < W; i++) 13: sum += a(row, i) * b(i, col); 14: c[idx] = sum; 15: }); 16: } To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant). 03: static const int TS = 16; ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows 07: parallel_for_each(c.grid.tile<TS,TS>(), ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08 08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) { ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows 09: index<2> idx = t_idx.global; ...and now this code just works and it is tiled! Closing thoughts on part 1 The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.
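    For reference, here is the transformed function with the four edits above applied in one place; it is assembled from the listing in the post and targets the same pre-release amp.h API (grid, writeonly, restrict(direct3d)) that the post uses.

    ```cpp
    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    void MatrixMultiplyTiled_Part1(std::vector<float>& vC, const std::vector<float>& vA,
                                   const std::vector<float>& vB, int M, int N, int W)
    {
        static const int TS = 16;                                     // line 03: tile size
        array_view<const float, 2> a(M, W, vA);
        array_view<const float, 2> b(W, N, vB);
        array_view<writeonly<float>, 2> c(M, N, vC);
        parallel_for_each(c.grid.tile<TS, TS>(),                      // line 07: tiled grid
            [=](tiled_index<TS, TS> t_idx) restrict(direct3d) {       // line 08: tiled index
                index<2> idx = t_idx.global;                          // line 09: keep using the global index
                int row = idx[0]; int col = idx[1];
                float sum = 0.0f;
                for (int i = 0; i < W; i++)
                    sum += a(row, i) * b(i, col);
                c[idx] = sum;
            });
    }
    ```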

    Read the article

  • SQL syntax error

    - by Robert
    I'm using Microsoft SQL Server, which I think is T-SQL or ANSI SQL. I want to search a database with a string. The matches that fit the beginning of the string should come first, then sort alphabetically. I.e. if the table contains FOO, BAR and RAP, a search for the string 'R' should yield: RAP, BAR, in that order. Here is my attempt: SELECT Name FROM MyTable WHERE (Name LIKE '%' + @name + '%') ORDER BY (IF(Name LIKE @name + '%',1,0)) The error message is: "must declare scalar variable @name"
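    A minimal sketch of one way to express this in T-SQL: the error simply means @name was never declared in the batch, and the conditional ordering is spelled with CASE, since T-SQL has no inline IF() expression in an ORDER BY.

    ```sql
    DECLARE @name NVARCHAR(50);
    SET @name = 'R';

    SELECT Name
    FROM   MyTable
    WHERE  Name LIKE '%' + @name + '%'
    ORDER BY CASE WHEN Name LIKE @name + '%' THEN 0 ELSE 1 END,  -- prefix matches first
             Name;                                               -- then alphabetically
    ```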

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not backup the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull out drives out a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
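    A minimal sketch of the offload pattern described above, with a placeholder database name: the lightweight physical-only check stays on production (the option is spelled PHYSICAL_ONLY in T-SQL, and the trace flags are the 2562/2549 enhancement mentioned, for SQL Server 2008 R2 SP1 CU4 and above), while the full logical CHECKDB runs against the copy you restored on another box.

    ```sql
    -- On the production server: physical-only checks, sped up by the trace flags.
    DBCC TRACEON (2562, 2549, -1);
    DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY;

    -- On the secondary server, against the restored copy: the full check.
    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    ```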

    Read the article

  • Documentation of a software project

    - by anijhaw
    I am working with a team that works on a very large software project, and we have tons of documentation written in MS Word format with no hyperlinked indexes and no search ability. Every day we waste our time trying to find the exact document or reference. I was wondering if there was a way, or even a professional tool, that would convert all this into a wiki format and, maybe with a little manual (painful) help, organise it into something that improves accessibility. I use Google Desktop Search to make my life a little easier, but it's not the best solution. I just want to know if any of you have faced similar problems, and possible solutions to this issue.

    Read the article

  • Exposing headers on iPhone static library

    - by leolobato
    Hello guys, I've followed this tutorial for setting up a static library with common classes from 3 projects we are working on. It's pretty simple: create a new static library project in Xcode, add the code there, and change some headers' role from Project to Public. The tutorial says I should add my library folder to the header search paths recursively. Is this the right way to go? I mean, in my library project, I have files separated into folders like Global/, InfoScreen/, Additions/. I was trying to set up one LOKit.h file in the root folder, and inside that file #import everything I need to expose. So in my host project I wouldn't need to add the folder recursively to the header search path, and would just #import "LOKit.h". But I couldn't get this to work; the host project won't build, complaining about all the classes I didn't add to LOKit.h, even though the library project builds. So, my question is, what is the right way of exposing header files when I set up a Cocoa Touch Static Library project in Xcode?
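    A minimal sketch of the umbrella-header idea, with hypothetical file names standing in for whatever lives in Global/, InfoScreen/ and Additions/: this only works if every header listed here (and LOKit.h itself) has its role set to Public, because Project-role headers are never copied into the built product, which is exactly what produces the missing-class build errors in the host project.

    ```objc
    // LOKit.h - umbrella header at the library root (file names are hypothetical).
    #import "LOGlobalDefines.h"          // from Global/
    #import "LOInfoScreenController.h"   // from InfoScreen/
    #import "NSString+LOAdditions.h"     // from Additions/
    ```

    With the public headers copied into the product, and the host project's header search path pointing at that single copy-headers location, #import "LOKit.h" on its own is enough; no recursive search path is needed.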

    Read the article

  • Simple File-based Record Storage with Fast Text Searching for Compact Framework and Silverlight

    - by Eric Farr
    I have a single table with lots of records (> 100k) that I need to be able to index and search on several text fields. The easiest searches will have the first part of the string specified (e.g., LIKE 'ABC%' in SQL). The tougher searches will need to search for any substring within the text fields (e.g., LIKE '%ABC%' in SQL). I need to run on the Compact Framework. SQL Compact is a memory hog and overkill for my one table. Besides, I'd like to be able to run on Silverlight 4 eventually. The file and indexes can be generated on the full .NET Framework and I only need read capability on the Compact Framework. My records are not especially large and can be expressed in fixed-length format. I'm looking for some existing code or libraries to avoid having to write a file-based BTree implementation from scratch.

    Read the article

  • Find products that have a list of attributes.

    - by bellesebastien
    I'm stuck trying to solve a problem that's proving to be more difficult than it seems. Consider there is a table that associates products with attributes; it looks like this:
    Products_id | Attribute_id
    21 | 456
    21 | 231
    21 | 26
    22 | 456
    22 | 26
    22 | 116
    23 | 116
    23 | 231
    Next, I have a list of attribute_ids which I want to use in order to get the products that have all the attributes in that list. For example, if I search in the table above using this list (456, 26) I should get these product_ids: 21 and 22. Another example: if I search for (116, 231) I should get an empty response, since there are no products that have both these attributes. How can I achieve this using one query? I hope I made my question clear. Thanks.
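    A minimal sketch of the usual GROUP BY / HAVING approach, with a hypothetical name for the mapping table shown above: the HAVING count must equal the number of attributes in the search list (2 here), so only products that have every one of them survive.

    ```sql
    SELECT Products_id
    FROM   product_attributes                 -- hypothetical name for the table above
    WHERE  Attribute_id IN (456, 26)
    GROUP BY Products_id
    HAVING COUNT(DISTINCT Attribute_id) = 2;  -- 2 = number of ids in the IN list
    ```

    The same query with (116, 231) returns nothing, since no product reaches the required count.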

    Read the article

  • Find Rows in Pipe-separated Values in MySQL?

    - by Trez
    Let's say I have a field 'category' with the value '1|2|3'. I want to search in MySQL such that it returns all rows where my search parameter matches one of the values in category. For example: $cat_id = 1; SELECT * FROM `myTable` WHERE cat_id is equal to or found in category with values '1|2|3'... something like that. I do not know how to put it into a correct SQL query. Any ideas? Thanks in advance.
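    A minimal sketch using FIND_IN_SET, which expects comma-separated lists, so the pipes are swapped for commas on the fly; the table and column names are the ones from the question, and the '1' is where the PHP $cat_id value would be bound.

    ```sql
    SELECT *
    FROM   myTable
    WHERE  FIND_IN_SET('1', REPLACE(category, '|', ',')) > 0;
    ```

    Unlike a plain LIKE '%1%', this matches whole values only, so '1' will not accidentally match a category of '11'.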

    Read the article

  • What are some of best Javascript memory detecting tools?

    - by Philip Fourie
    Our team is faced with a slow but serious JavaScript memory leak. We have read up on the normal causes for memory leaks in JavaScript (e.g. closures and circular references). We tried to avoid those pitfalls in the code, but it is likely we still have unknown mistakes left in our code. I started my search for available tools but would like input from people with actual experience with these tools. Some of the tools I found so far (but have no idea how good and useful they would be for our problem): Sieve, Drip, and the JavaScript Memory Leak Detector. Our search is not limited to free tools; free would be a bonus, but more importantly we need something that will get the job done. We do the following in our JavaScript code: AJAX calls to a .NET WCF back-end that sends back JSON data, DOM manipulation, and keeping a fairly sized object model in the JavaScript to store current state.

    Read the article

  • How to loop through the FormCollection to check if textboxes have values?

    - by Sasha
    Hi there. I have a search page that has 6 textboxes which I pass as a FormCollection to the action in the controller. I don't want to search for records if there are no values in the textboxes. Is there a way to loop through all the textboxes in the FormCollection to check which ones have values in them? I am a student in college and this project is part of my summer experience program. I realize that this is a newbie question :) Thank you!
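    A minimal sketch for an ASP.NET MVC action (the controller and action names are placeholders): FormCollection exposes AllKeys and a string indexer, so a single LINQ pass can tell whether any posted textbox holds a non-empty value before the search runs.

    ```csharp
    using System.Linq;
    using System.Web.Mvc;

    public class SearchController : Controller
    {
        [HttpPost]
        public ActionResult Index(FormCollection form)
        {
            // True if at least one posted field contains something besides whitespace.
            bool hasCriteria = form.AllKeys
                .Any(key => (form[key] ?? "").Trim().Length > 0);

            if (!hasCriteria)
                return View();   // nothing entered, skip the database search

            // ... run the search here using the non-empty fields ...
            return View();
        }
    }
    ```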

    Read the article

  • Constructing T-SQL WHERE condition at runtime

    - by Nickson
    I would like to implement a search function where a user passes all the arguments to the "WHERE" clause at runtime. For example, in the query below: SELECT Col1, Col2, Col3, Col4 FROM MyTable WHERE Col2 = John. Now what I want is to give the user a dropdown list of columns such that the user selects a column to search by at runtime. Also, instead of hard-coding Col2 = John, I want the user to choose their own operator at runtime (such as choosing between =, >, <, LIKE, IN). I basically want to construct a query like SELECT Col1, Col2, Col3, Col4 FROM MyTable WHERE (@FieldToSearchBy e.g. Col3, @OperatorToUseInSearch e.g. LIKE, @ValueToSearch e.g. John). I want to pass @FieldToSearchBy, @OperatorToUseInSearch and @ValueToSearch as user-specified parameters at runtime. I want to do this with a TableAdapter, like in this example: http://www.codeproject.com/KB/database/TableAdapter.aspx
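    A minimal sketch of the usual T-SQL pattern for this (variable names mirror the question's): the column and operator cannot be real parameters, so they are checked against a fixed whitelist and spliced into the text, with QUOTENAME protecting the column name, while the search value stays a genuine parameter passed through sp_executesql. A TableAdapter can call a stored procedure built around this just as easily as a fixed query.

    ```sql
    DECLARE @FieldToSearchBy       SYSNAME;
    DECLARE @OperatorToUseInSearch NVARCHAR(10);
    DECLARE @ValueToSearch         NVARCHAR(100);
    DECLARE @sql                   NVARCHAR(MAX);

    SET @FieldToSearchBy       = N'Col3';
    SET @OperatorToUseInSearch = N'LIKE';
    SET @ValueToSearch         = N'%John%';

    -- Only splice in values we recognise; anything else is silently rejected here.
    IF @OperatorToUseInSearch IN (N'=', N'<', N'>', N'LIKE')
       AND @FieldToSearchBy IN (N'Col1', N'Col2', N'Col3', N'Col4')
    BEGIN
        SET @sql = N'SELECT Col1, Col2, Col3, Col4 FROM MyTable WHERE '
                 + QUOTENAME(@FieldToSearchBy) + N' ' + @OperatorToUseInSearch + N' @val';
        EXEC sys.sp_executesql @sql, N'@val NVARCHAR(100)', @val = @ValueToSearch;
    END
    ```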

    Read the article

  • Why won't this sort in Solr work?

    - by Camran
    I need to sort on a date-field type, which name is "mod_date". It works like this in the browser adress-bar: http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+desc But I am using a phpSolr client which sends an URL to Solr, and the url sent is this: fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc // This wont work and is echoed after this in php: $queryString = http_build_query($params, null, $this->_queryStringDelimiter); $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString); This wont work, I dont know why! Everything else works fine, all right fields are returned. But the sort doesn't work. Any ideas? Thanks BTW: The field "mod_date" contains something like: 2010-03-04T19:37:22.5Z EDIT: First I use PHP to send this to a SolrPhpClient which is another php-file called service.php: require_once('../SolrPhpClient/Apache/Solr/Service.php'); $solr = new Apache_Solr_Service('localhost', 8983, '/solr/'); $results = $solr->search($querystring, $p, $limit, $solr_params); $solr_params is an array which contains the solr-parameters (q, fq, etc). Now, in service.php: $params['version'] = self::SOLR_VERSION; // common parameters in this interface $params['wt'] = self::SOLR_WRITER; $params['json.nl'] = $this->_namedListTreatment; $params['q'] = $query; $params['sort'] = 'mod_date desc'; // HERE IS THE SORT I HAVE PROBLEM WITH $params['start'] = $offset; $params['rows'] = $limit; $queryString = http_build_query($params, null, $this->_queryStringDelimiter); $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString); if ($method == self::METHOD_GET) { return $this->_sendRawGet($this->_searchUrl . $this->_queryDelimiter . $queryString); } else if ($method == self::METHOD_POST) { return $this->_sendRawPost($this->_searchUrl, $queryString, FALSE, 'application/x-www-form-urlencoded'); } The $results contain the results from Solr... So this is the way I need to get to work (via php). This code below (also on top of this Q) works but thats because I paste it into the adress bar manually, not via the PHPclient. But thats just for debugging, I need to get it to work via the PHPclient: http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+des // Not via phpclient, but works UPDATE (2010-03-08): I have tried Donovans codes (the urls) and they work fine. Now, I have noticed that it is one of the parameters causing the 'SORT' not to work. This parameter is the "wt" parameter. If we take the url from top of this Q, (fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc), and just simply remove the "wt" parameter, then the sort works. BUT the results appear differently, thus making my php code not able to recognize the results I believe. Donovan would know this I think. I am guessing in order for the PHPClient to work, the results must be in a specific structure, which gets messed up as soon as I remove the wt parameter. Donovan, help me please... Here is some background what I use your SolrPhpClient for: I have a classifieds website, which uses MySql. But for the searching I am using Solr to search some indexed fields. Then Solr returns an array of ID:numbers (for all matches of the search criteria). Then I use those ID:numbers to find everything in a MySql db and fetch all other information (example is not searchable information). 
So simplified: Search - Solr returns all matches in an array of ID:nrs - Id:numbers from Solr are the same as the Id numbers in the MySql db, so I can just make a simple match agains every record with the ID matching the ID from the Solr results array. I don't use Faceting, no boosting, no relevancy or other fancy stuff. I only sort by the latest classified put, and give the option to users to also sort on the cheapest price. Nothing more. Then I use the "fq" parameter to do queries on different fields in Solr depending on category chosen by users (example "cars" in this case which in my language is "Bilar"). I am really stuck with this problem here... Thanks for all help
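    One thing worth trying before editing Service.php at all, sketched below: versions of the solr-php-client accept an array of extra parameters on search() and merge them into the query string they build, so sort can travel alongside fq without touching the library. Whether your client version merges the array this way, and the exact fq value, are assumptions to verify.

    ```php
    <?php
    require_once '../SolrPhpClient/Apache/Solr/Service.php';

    $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');
    $solr_params = array(
        'fq'   => '+category:"Bilar" +car_action:Säljes',
        'sort' => 'mod_date desc',   // let the client URL-encode the space itself
    );
    $results = $solr->search('bmw', 0, 5, $solr_params);
    ```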

    Read the article

  • What is your most productive shortcut with Vim?

    - by Olivier Pons
    I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff and I'm at best 10 times less productive with Vim. The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are: Using alternatively left and right hands is the fastest way to use the keyboard. Never touching the mouse is the second way to be as fast as possible. It takes ages for you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place) Here are two examples demonstrating why I'm far less productive with Vim. Copy/Cut & paste. I do it all the time. With all the classical editors you press Shift with the left hand, and you move the cursor with your right hand to select text. Then Ctrl+C copies, you move the cursor and Ctrl+V pastes. With Vim it's horrible: yy to copy one line (you almost never want the whole line!) [number xx]yy to copy xx lines into the buffer. But you never know exactly if you've selected what you wanted. I often have to do [number xx]dd then u to undo! Another example? Search & replace. In PSPad: Ctrl+f then type what you want you search for, then press Enter. In Vim: /, then type what you want to search for, then if there are some special characters put \ before each special character, then press Enter. And everything with Vim is like that: it seems I don't know how to handle it the right way. NB : I've already read the Vim cheat sheet :) My question is: What is the way you use Vim that makes you more productive than with a classical editor?

    Read the article

  • Replacing IFrame with div

    - by Roland
    I have an IFrame where I load a custom search and display the results within the iframe. I obtain the search results by calling an external URL that returns a value. I need to implement the same thing for a mobi site that works on mobile devices, and thus I need to replace the IFrame with something else. Will this be possible using a div tag, since most mobile devices do not support frames? And no JavaScript may be used. Any advice will be appreciated.
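    A minimal sketch of the frame-free approach: with no JavaScript available, fetch the external search results on the server and emit them inside an ordinary div. The URL, query parameter and markup below are placeholders, not details from the question, and assume a PHP back end.

    ```php
    <?php
    // Fetch the external search result server-side, then print it inside a <div>.
    $q = isset($_GET['q']) ? $_GET['q'] : '';
    $results = file_get_contents('http://example.com/search?q=' . urlencode($q));
    ?>
    <div class="search-results">
      <?php echo $results; ?>
    </div>
    ```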

    Read the article

  • Looking for an extended version of Lightbox

    - by itsandy
    Hi all, I am not sure how to put this (maybe I'm not able to search for it properly because I'm not sure what it is called), but I am looking for a script which is a kind of extended version of the lightbox script. I want to place some images on my website which, when clicked, open as a lightbox and can go next and previous, but the trick is that the next images have to be sub-pics of the pic which is displayed. So let's say I have "a", "b", "c", ... images shown on my website; when someone clicks "a", the image "a" opens, but then when he clicks the next image {{ with the help of the lightbox script }} he goes to "a.1", "a.2", ... and so on for image "b". Can anyone help me find this script? I have seen it somewhere but am not sure of the search term. Many thanks.

    Read the article
