Search Results

Search found 68614 results on 2745 pages for 'full set arguments'.

Page 78/2745 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Microsoft Standalone CA - Set expiration date of an individual request

    - by Hall72215
    I have set up a Microsoft Standalone CA on 2008 R2 as a root CA, and I'm trying to set up a subordinate Enterprise CA. I generated the certificate request and submitted it to the root CA. Then I ran the following command to set the expiration date to 20 years (the request ID is 5):

        certutil -setattributes 5 "ValidityPeriod:Years\nValidityPeriodUnits:20"

    Then I approved the request, but it failed. The Request Status Code is:

        The specified time is invalid. 0x8007076d (WIN32: 1901)

    The Request Disposition Message is:

        Denied by Policy Module 0x8007076d, The requested validity period is invalid.
        Confirm that the validity period or expiration date and time specified in the
        request does not extend beyond the validity period of the CA certificate, the
        certificate template, and the CA. The validity period of the CA can be verified
        by running the following commands: certutil -getreg ca\validityPeriod &
        certutil -getreg ca\ValidityPeriodUnits

    The validity period of the CA certificate is 40 years (expires in 2052). The template condition doesn't apply since this is a standalone CA. The result of those commands is Years and 1, respectively.

    It appears that I will need to change the CA's validityPeriod and validityPeriodUnits, but I want to keep the default expiration for a request at 1 year. Is there a way to set a maximum and a default expiration, or am I going to have to change the values, issue the certificate, and then change them back?
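
    As far as I know there is no separate "maximum vs. default" knob on a Standalone CA - the ValidityPeriod/ValidityPeriodUnits registry values act as both cap and default - so the change-issue-restore dance is the usual answer. A sketch of that workaround (standard certutil registry commands; the CA service must be restarted for -setreg to take effect):

        rem Raise the policy module's validity cap to 20 years
        certutil -setreg ca\ValidityPeriod "Years"
        certutil -setreg ca\ValidityPeriodUnits 20
        net stop certsvc && net start certsvc

        rem Resubmit and issue the denied request (ID 5), then restore the 1-year default
        certutil -resubmit 5
        certutil -setreg ca\ValidityPeriodUnits 1
        net stop certsvc && net start certsvc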

    Read the article

  • How to set up the CNAME in DNS zone record to work with Unbounce

    - by Lirik
    I'm trying to run split testing on some landing pages I "designed" with Unbounce, but it requires that I set a CNAME record for my domain/sub-domain, and I'm having trouble figuring out the right way to do it. My host is Arvixe (www.arvixe.com) and their customer support has failed to help me for the past 5 days (I spoke to them multiple times).

    I followed the directions for setting the CNAME record and was able to set it, but I'm consistently unable to verify that it is set up correctly. I followed the instructions on Unbounce to verify the CNAME record for my sub-domain (beta.devboost.com) and here are the results:

        No records found
        reverse lookup | smtp diag | port scan | blacklist
        Reported by ns1.SNARE.arvixe.com on Thursday, November 10, 2011 at 5:49:57 PM (GMT-6)

    Here is my DNS zone record from the control panel of my host (last record, CNAME unbouncepages.com). Is there something wrong with my DNS zone record? What's the right way to do this?

    Update: I also have a CNAME record for beta in my root domain (devboost.com). I've updated my sub-domain record now: I've removed most of the other DNS records, and I've removed the beta label for the CNAME record. Is that correct? Is there anything else I need to do?
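
    For reference, a sketch of what the zone record and its verification usually look like (Unbounce's documented CNAME target is unbouncepages.com; the beta label matches the sub-domain above):

        ; in the devboost.com zone
        beta    IN  CNAME   unbouncepages.com.

        ; verify from a shell once DNS has propagated
        dig +short beta.devboost.com CNAME

    Note the trailing dot on the target, and that a CNAME cannot coexist with other records for the same label - a common reason control-panel edits silently fail.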

    Read the article

  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question here on SF, so please forgive me if I manage to bork the post. :)

    I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the header 'Access-Control-Allow-Origin' on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header setting.

    Here's an example of the Apache config for the first machine:

        NameVirtualHost 10.0.0.2:80
        <VirtualHost 10.0.0.2:80>
            DocumentRoot /var/www/host.example.com
            ServerName host.example.com
            JkMount /webapp/* jkworker
            Header set Access-Control-Allow-Origin "*"
            RewriteEngine on
            RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]
        </VirtualHost>

    And here's an example of the Apache config for the second:

        NameVirtualHost 10.0.1.2:80
        <VirtualHost 10.0.1.2:80>
            DocumentRoot /var/www/otherhost.example.com
            ServerName otherhost.example.com
            JkMount /webapp/* jkworker
            Header set Access-Control-Allow-Origin "*"
        </VirtualHost>

    When I hit host.example.com we see that the header is set:

        $ curl -i http://host.example.com/
        HTTP/1.1 302 Moved Temporarily
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=ISO-8859-1

    And when I hit otherhost.example.com we see that it too is setting the header:

        $ curl -i http://otherhost.example.com
        HTTP/1.1 200 OK
        Server: Apache/2.0.46 (Red Hat)
        Location: http://otherhost.example.com/index.htm
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=UTF-8

    But when I try to hit the rewrite rule at host.example.com/otherhost we get no love:

        $ curl -i http://host.example.com/otherhost/
        HTTP/1.1 302 Found
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Location: http://otherhost.example.com/
        Content-Length: 0
        Content-Type: text/html; charset=iso-8859-1

    Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?
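
    One thing worth trying before blaming mod_jk: by default mod_headers' Header set only applies to the normal ("onsuccess") response table, and redirects generated locally by RewriteRule [R] can miss it. Later Apache 2.2 releases accept an always condition that also covers those responses; a sketch for the first vhost (hedged - check that your 2.2.x build supports it):

        Header always set Access-Control-Allow-Origin "*"
        RewriteEngine on
        RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]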

    Read the article

  • .htaccess - RewriteRules working, but browser address bar displaying full (unfriendly) URL

    - by axol
    Hey there,

    Haven't been able to find a solution to this around the net or these forums - apologies if I've missed something!

    My .htaccess RewriteRules are working well - I have search-engine and user -friendly links in my web pages, and unfriendly database URLs running hidden in the background. Except when I added a RewriteRule to add "www." to the front of URLs if the user didn't enter it - to ensure only one version appears in search engines. Here's what is now happening, and I can't figure out why!

    My friendly URL structure for content is like this, and the database query string uses the first "importantword":

        www.example.com/importantword-nonimportantword/

    .htaccess snippet:

        Options +FollowSymLinks
        Options -Indexes
        RewriteEngine on
        RewriteOptions MaxRedirects=10
        RewriteBase /
        RewriteRule ^/$ index.php [L]
        RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]
        RewriteCond %{HTTP_HOST} !^www.example.com$
        RewriteRule ^(.*)$ http://www.example.com/$1 [L]

    What's happening since I added the last 2 lines:

    CASE 1: user types (or clicks) www.example.com/honda-vehicle/overview/ - Works correctly - they are taken to the correct page and the browser URL bar says: www.example.com/honda-vehicles/overview/

    CASE 2: user types example.com - Works correctly - they are taken to www.example.com and the browser URL bar says: www.example.com

    CASE 3: user types (or clicks) example.com/honda-vehicles/overview/ i.e. without the prefix "www" - Does NOT work correctly - they are taken to the right page, but the browser URL bar displays the unfriendly URL: www.example.com/detail.php?categoryID=honda

    I figure there's some issue with the order of the RewriteRules, but it's doing my head in trying to logically step through it and figure it out! Any assistance at all or pointers would be most appreciated!
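
    The rule order is indeed the problem: for a non-www request, the detail.php rule matches first and rewrites the URL internally, and only then does the host rule fire - issuing an external redirect to the already-rewritten, unfriendly URL. A sketch of the usual arrangement (canonicalize the host first as an explicit 301, then do internal rewrites; note ^$ rather than ^/$, since per-directory rewrites strip the leading slash):

        Options +FollowSymLinks
        Options -Indexes
        RewriteEngine on
        RewriteBase /

        # 1. Canonicalize the host first, as a visible redirect
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

        # 2. Then map friendly URLs internally
        RewriteRule ^$ index.php [L]
        RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]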

    Read the article

  • How to use a specific PDF IFilter

    - by dthrasher
    I'm trying to extract text from PDF files using an IFilter. The Adobe PDF IFilter that is distributed with Adobe Reader is awful, returning HRESULT E_FAIL messages for many PDF documents. The FoxIt PDF IFilter works beautifully on virtually all of the PDFs I've been using for testing. The problem is that every time the Adobe Updater runs, it replaces the awesome FoxIt IFilter with the crappy Adobe IFilter. I've been using the LoadIFilter method to get the registered IFilter for PDF files. Is there a way to force the Win32 API to load the FoxIt IFilter instead of the Adobe IFilter? NOTE: This question about determining which IFilters are installed asks a related - but not identical - question.
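
    One approach, offered as a sketch rather than a known fix: skip LoadIFilter's registry lookup entirely and CoCreateInstance the FoxIt filter yourself, then hand it the file through IPersistFile. The CLSID below is a placeholder - look up the one the FoxIt installer registers under HKCR\CLSID (or via the .pdf PersistentHandler entries):

        #include <windows.h>
        #include <filter.h>      // IFilter (IFilter SDK headers)

        // Placeholder GUID - substitute the CLSID the FoxIt installer registers.
        static const CLSID CLSID_FoxitPdfIFilter =
            { 0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };

        IFilter* LoadFoxitFilter(const wchar_t* path)   // assumes COM is initialized
        {
            IFilter* filter = NULL;
            // Bind to this specific filter instead of LoadIFilter's registered default
            if (FAILED(CoCreateInstance(CLSID_FoxitPdfIFilter, NULL,
                                        CLSCTX_INPROC_SERVER, IID_IFilter,
                                        (void**)&filter)))
                return NULL;

            IPersistFile* pf = NULL;
            if (SUCCEEDED(filter->QueryInterface(IID_IPersistFile, (void**)&pf)))
            {
                pf->Load(path, STGM_READ);   // attach the PDF to the filter
                pf->Release();
            }
            ULONG flags = 0;
            filter->Init(IFILTER_INIT_APPLY_INDEX_ATTRIBUTES, 0, NULL, &flags);
            return filter;   // caller runs the GetChunk/GetText loop, then Release()
        }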

    Read the article

  • JMeter to grab full querystring into a variable for future use

    - by jfbauer
    Someone provided me the regex to parse out a query string:

        (?<=\?)[^?]+$

    I am trying to use that in JMeter with no luck (although I have been successful in pulling out individual query string parameter values, based on various example postings on the web). I created a Regular Expression Extractor called "Grab QueryString" and configured it as follows:

        Response field to check: URL
        Reference name:          myQueryString
        Regular expression:      (?<=\?)[^?]+$
        Template:                $1$
        Match no.:               1
        Default value:           ERROR

    Unfortunately, "myQueryString" is getting populated with ERROR and not the URL query string as hoped when I try to use it as a parameter in a future GET. Thus, I see this in the View Results Tree:

        https://www.website.com/folder/page.aspx?ERROR

    Instead of:

        https://www.website.com/folder/page.aspx?jfhjHSDjgdjhsjhsdhjSJHWed

    Did I do something wrong? Anyone have any suggestions?
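
    A likely cause, judging only from the settings above: the template $1$ refers to capture group 1, but (?<=\?)[^?]+$ contains no capturing group, so group 1 never matches and the default ERROR is used. Wrapping the interesting part in parentheses (which also sidesteps any lookbehind-support questions in JMeter's regex engine) should fix it; a sketch:

        Regular expression:  \?(.+)$
        Template:            $1$
        Match no.:           1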

    Read the article

  • Set JENKINS_HOME in Tomcat7?

    - by C. Ross
    I'm trying to set up Jenkins in Tomcat7 on Ubuntu. I installed Tomcat7 and deployed jenkins.war, and I now see the Jenkins home page at http://myhost:8080/jenkins, but it's attempting to create the Jenkins directory at /usr/share/tomcat7/.jenkins, which it can't for security reasons. I've already created /srv/jenkins and given the tomcat7 group permissions, and want to set JENKINS_HOME to that path.

    I've tried adding it to the Tomcat configuration in /etc/tomcat7/server.xml:

        <GlobalNamingResources>
            <Environment name="JENKINS_HOME" value="/srv/jenkins" type="java.lang.String" override="false"/>
            <!-- Default settings -->

    And I've also tried adding it to the automatically created context file in ROOT/META-INF/context.xml (there is no $CATALINA_HOME/conf as far as I can tell):

        <Context path="/" antiResourceLocking="false" >
            <Environment name="JENKINS_HOME" value="/srv/jenkins/" type="java.lang.String"/>
        </Context>

    But even after restarting tomcat7 I still get the same result (trying to use /usr/share/tomcat7/.jenkins). Where do I need to set the environment variable for JENKINS_HOME in Tomcat7?
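
    Two approaches that commonly work with Ubuntu's tomcat7 packaging, sketched below (paths are the Debian/Ubuntu defaults - confirm on your box). Jenkins honors a JENKINS_HOME JVM system property, so the simplest route is often the service defaults file; the alternative is a per-webapp context file named after the jenkins context rather than ROOT:

        # /etc/default/tomcat7 - pass JENKINS_HOME as a JVM system property
        JAVA_OPTS="${JAVA_OPTS} -DJENKINS_HOME=/srv/jenkins"

        # or: /etc/tomcat7/Catalina/localhost/jenkins.xml
        # <Context>
        #   <Environment name="JENKINS_HOME" value="/srv/jenkins"
        #                type="java.lang.String" override="false"/>
        # </Context>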

    Read the article

  • Full JSON-RPC specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for it. So far I have found only one resource that can be called a real specification, JSON-RPC 1.0: http://json-rpc.org/wiki/specification

    There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?)

    However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, which in turn requires the JSON Schema specification - but it points to an incorrect URL as a reference. So far I have found the following, and it is still a draft: http://tools.ietf.org/html/draft-zyp-json-schema-02

    Can anybody point me to some actual specifications - at least something official and updated? It looks like implementing JSON-RPC 1.0 and 2.0 would not be enough, at least for frameworks like Dojo. Or am I wrong?

    Questions:

    1. Is it enough to implement the JSON-RPC 1.0 specification and the 2.0 draft to be on the safe side - would this work for most JSON-RPC clients?

    2. If I should implement SMD, or it is recommended, can somebody point to official specifications of JSON Schema and Service Mapping Description - or are the links I found really the "specifications"?

    Note: do not suggest existing JSON-RPC service implementations.
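
    For orientation, the 2.0 wire format is tiny; a request/response pair adapted from the 2.0 proposal's own examples, plus the error shape:

        --> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
        <-- {"jsonrpc": "2.0", "result": 19, "id": 1}

        --> {"jsonrpc": "2.0", "method": "nosuch", "id": 2}
        <-- {"jsonrpc": "2.0", "error": {"code": -32601, "message": "Method not found"}, "id": 2}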

    Read the article

  • Sphinx PHP search

    - by James
    I'm doing a Sphinx search but turning up some really weird results. Any help is appreciated.

    So for example if I type "50", I get:

        50 Cent
        50 Lions
        50 Foot Wave, etc.

    This is great, but when I search "50 Ce", I get:

        Ryczace Dwudziestki
        Spisek
        Bernhard Gal
        Cowabunga
        Go-Go

    and other crazy results. Also, when I search for "50 Cent", the correct result is at the top, but then random results below. Any ideas why?

    PHP code:

        $query = $_GET['query'];
        if (!empty($query)) {
            $sphinx->SetMatchMode(SPH_MATCH_ALL);
            $sphinx->AddQuery($query, 'artists');
            $sphinx->AddQuery($query, 'variations');
            $sphinx->SetFilter('name', array(3));
            $sphinx->SetLimits(0, 10);
            $result = $sphinx->RunQueries();
            echo '<pre>';
            switch ($result) {
                case false:
                    echo 'Query failed: ' . $sphinx->GetLastError() . "\n";
                    break;
                default:
                    if ($sphinx->GetLastWarning()) {
                        echo 'WARNING: ' . $sphinx->GetLastWarning() . "\n";
                    }
                    if (is_array($result[0]['matches']) && count($result[0]['matches'])) {
                        foreach ($result[0]['matches'] as $value => $info) {
                            $artist = artistDetails($value);
                            echo $artist['name'] . "\n";
                        }
                    }
            }
        }

    Sphinx index and source:

        source artists
        {
            type      = mysql
            sql_host  = localhost
            sql_user  = user
            sql_pass  = pass
            sql_db    = db
            sql_port  = 3300
            sql_query = \
                SELECT \
                    id, name \
                FROM artists;
            #UNIX_TIMESTAMP(time)
            #sql_attr_uint      = group_id
            #sql_attr_timestamp = time
            sql_query_info = SELECT id,name FROM artists WHERE id=$id
        }

        index artists
        {
            source       = artists
            path         = /var/sphinx/artists
            docinfo      = extern
            charset_type = utf-8
        }
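
    One plausible explanation, guessing from the config above: nothing in the index supports partial words, so the incomplete keyword "Ce" matches no indexed term and results degrade oddly. For as-you-type matching, Sphinx needs prefix indexing enabled plus star syntax in the query; a sketch of the index changes (option names from the 0.9.x/1.x config format used above):

        index artists
        {
            source         = artists
            path           = /var/sphinx/artists
            docinfo        = extern
            charset_type   = utf-8
            min_prefix_len = 1     # index prefixes of every word
            enable_star    = 1     # allow wildcard queries like "Ce*"
        }

    After rebuilding with indexer --rotate, query with a trailing star on the possibly-incomplete word, e.g. "50 Ce*". As an aside, SetFilter() operates on attributes, and name here is an indexed field rather than an attribute, so that filter may also be misbehaving.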

    Read the article

  • Indexing CSV file contents in Python

    - by Hossein
    Hi, I have a very large CSV file containing only two fields (id, url). I want to do some indexing on the url field with Python. I know that there are some tools like Whoosh or PyLucene, but I can't get the examples to work. Can someone help me with this?
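
    A minimal Whoosh sketch, assuming a headerless two-column CSV (id,url) and the Python 2-era Whoosh releases current when this was asked; names like data.csv and indexdir are placeholders:

        import csv
        from whoosh.index import create_in
        from whoosh.fields import Schema, ID, TEXT
        from whoosh.qparser import QueryParser

        # Define what gets indexed: store the id, analyze the url text
        schema = Schema(id=ID(stored=True), url=TEXT(stored=True))
        ix = create_in("indexdir", schema)   # "indexdir" must already exist

        writer = ix.writer()
        with open("data.csv", "rb") as f:
            for row_id, url in csv.reader(f):
                writer.add_document(id=unicode(row_id), url=unicode(url))
        writer.commit()

        # Query it back
        with ix.searcher() as searcher:
            query = QueryParser("url", schema=ix.schema).parse(u"example")
            for hit in searcher.search(query, limit=10):
                print hit["id"], hit["url"]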

    Read the article

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary:

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally.  Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug  7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug  7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug  7 22:57:09 DBClientCursor::init call() failed
        Sun Aug  7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug  7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug  7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug  7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became secondary, and host-b was marked as dead - but actually it is still alive:

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                {
                    "_id" : 0,
                    "name" : "host-a:27017",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY",
                    "optime" : { "t" : 1312775799000, "i" : 1 },
                    "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                    "self" : true
                },
                {
                    "_id" : 1,
                    "name" : "host-b",
                    "health" : 0,
                    "state" : 6,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : { "t" : 0, "i" : 0 },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                    "errmsg" : "still initializing"
                }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document - did I do something wrong?

    Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfig on the secondary node, but the problem is that there is no primary node:

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?
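
    A guess worth checking, since these symptoms ("still initializing", failed heartbeats while the hosts are up) often come down to name resolution: every member must be able to resolve and reach every other member - and itself - by exactly the hostnames stored in the replica set config. A sketch of a more explicit setup that removes the ambiguity (hostnames/ports are examples):

        // run on host-a in the mongo shell: declare all members up front
        rs.initiate({
            _id: "myset",
            members: [
                { _id: 0, host: "host-a:27017" },
                { _id: 1, host: "host-b:27017" },
                { _id: 2, host: "host-c:27017", arbiterOnly: true }
            ]
        })

        // sanity checks from each box before/after:
        //   ping host-a; ping host-b; ping host-c
        //   mongo --host host-b --port 27017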

    Read the article

  • Submit HTML form with GET method with full action path

    - by Manny Calavera
    Hello. I am trying to submit a form with method GET and action "index.php?id=3". The problem is that it jumps to the URL specified but cuts off the "?id=3" part, and I need that so I can identify a user. Any ideas on how I could submit the whole URL? I don't want to change the method; by design it's much more complex than I've described here - this is a simplified version. Any ideas? Thanks.
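
    For what it's worth, this is standard behavior: on a GET submit, the browser replaces the action's query string with the form's fields. The usual workaround is to carry the value as a hidden field instead; a sketch:

        <form method="get" action="index.php">
            <!-- re-adds id=3 to the submitted query string -->
            <input type="hidden" name="id" value="3">
            <input type="text" name="q">
            <input type="submit" value="Go">
        </form>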

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the optimizer not to reorder the joins by using STRAIGHT_JOIN; PostgreSQL offers no such option.

    Here is the query:

        SELECT avg(d.amount) AS amount, y.year
        FROM station s, station_district sd, year_ref y, month_ref m, daily d
        LEFT JOIN city c ON c.id = 10663
        WHERE
          -- Find all the stations within a specific unit radius ... --
          6371.009 * SQRT(
            POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
            (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
             POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))) <= 50 AND
          -- Ignore stations outside the given elevations --
          s.elevation BETWEEN 0 AND 2000 AND
          sd.id = s.station_district_id AND
          -- Gather all known years for that station ... --
          y.station_district_id = sd.id AND
          -- The data before 1900 is shaky; insufficient after 2009. --
          y.year BETWEEN 1980 AND 2000 AND
          -- Filtered by all known months ... --
          m.year_ref_id = y.id AND
          m.month = 12 AND
          -- Whittled down by category ... --
          m.category_id = '001' AND
          -- Into the valid daily climate data. --
          m.id = d.month_ref_id AND
          d.daily_flag_id <> 'M'
        GROUP BY y.year

    It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query, as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!
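
    PostgreSQL does have an equivalent, if blunter, knob: with join_collapse_limit (and from_collapse_limit) set to 1, the planner keeps explicit JOINs in the order you wrote them, which is effectively STRAIGHT_JOIN. A sketch, assuming the comma-joins are rewritten as explicit JOINs starting from city:

        -- per-session: plan joins exactly in the written order
        SET join_collapse_limit = 1;
        SET from_collapse_limit = 1;

        SELECT avg(d.amount) AS amount, y.year
        FROM city c
        JOIN station s
          ON 6371.009 * sqrt(...) <= 50        -- the distance test as written above
        JOIN station_district sd ON sd.id = s.station_district_id
        JOIN year_ref y  ON y.station_district_id = sd.id
        JOIN month_ref m ON m.year_ref_id = y.id
        JOIN daily d     ON d.month_ref_id = m.id
        WHERE c.id = 10663
          AND s.elevation BETWEEN 0 AND 2000
          AND y.year BETWEEN 1980 AND 2000
          AND m.month = 12 AND m.category_id = '001'
          AND d.daily_flag_id <> 'M'
        GROUP BY y.year;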

    Read the article

  • Full page border | CSS

    - by Wayne
    Isn't there a CSS way of giving the page a border all the way around it, even if the content isn't big enough to make the page scroll - so there's still a border around the whole page? I think I remember seeing a CSS method for this before, but I can't recall what it was - do you know? lol. If you know, many thanks :)
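
    One minimal sketch of the usual trick: make html/body fill the viewport and put the border on a full-height wrapper (the class name is made up):

        html, body {
            height: 100%;
            margin: 0;
        }
        .page-border {                        /* hypothetical wrapper element */
            min-height: 100%;
            border: 4px solid #333;
            -moz-box-sizing: border-box;      /* keep the border inside 100% */
            box-sizing: border-box;
        }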

    Read the article

  • Can AVAudioSession do full duplex?

    - by Eric Christensen
    It would seem like it should be able to, but the following breakout test code can't do both:

        // play a file:
        NSArray *pathsArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [pathsArray objectAtIndex:0];
        NSString *playFilePath = [documentsDirectory stringByAppendingPathComponent:@"testplayfile.wav"];
        AVAudioPlayer *tempplayer = [[AVAudioPlayer alloc]
            initWithContentsOfURL:[NSURL fileURLWithPath:playFilePath] error:nil];
        [tempplayer prepareToPlay];
        [tempplayer play];

        // and record a file:
        NSString *recFilePath = [documentsDirectory stringByAppendingPathComponent:@"testrecordfile.wav"];
        AVAudioRecorder *soundrecording = [[AVAudioRecorder alloc]
            initWithURL:[NSURL fileURLWithPath:recFilePath] settings:nil error:nil];
        [soundrecording prepareToRecord];
        [soundrecording record];

    This is the minimum I can think of to individually play one file and record another, and it works just fine in the simulator: I can play back a file and record at the same time. But it doesn't work on the iPhone itself. If I comment out either function, the other performs fine. The playback plays fine either alone or with both, if it's first. If I comment out the playback, the record records fine. (There's additional code to stop the recording, not shown here.) So each works fine, but not together.

    I know AudioQueue has a setting to allow both, but I don't see an analogue for AVAudioSession. Any idea if it's possible, and if so, what I need to add?

    Thanks!
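
    Yes - the missing piece is almost certainly the session category. On the device, the default category doesn't permit simultaneous input and output; setting AVAudioSessionCategoryPlayAndRecord before creating the player and recorder should do it. A sketch:

        // before creating the player and recorder:
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *err = nil;
        [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&err];
        [session setActive:YES error:&err];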

    Read the article

  • How Do I Programmatically Set a File Tag

    - by Greg Bishop
    When using Windows Explorer to view files, I'm given the option to set a "tag", "category", or other attributes. For a JPEG, a different set of attributes (including "tag") is available. I'd like to be able to set these programmatically. How do I programmatically set a file tag and other file attributes using Delphi (I have Delphi 2010 Pro)?
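
    Those Explorer fields live in the Windows Property System (Vista and later); the "Tags" column is the System.Keywords property. A hedged C++ sketch of the underlying calls - the same COM interfaces should be reachable from Delphi 2010, though the import unit names vary by release:

        #include <shlobj.h>
        #include <propsys.h>
        #include <propkey.h>       // PKEY_Keywords == Explorer's "Tags" field
        #include <propvarutil.h>

        HRESULT SetFileTag(PCWSTR path, PCWSTR tags)   // assumes COM is initialized
        {
            IPropertyStore* store = NULL;
            HRESULT hr = SHGetPropertyStoreFromParsingName(
                path, NULL, GPS_READWRITE, IID_PPV_ARGS(&store));
            if (FAILED(hr)) return hr;

            PROPVARIANT pv;
            // Keywords is a string vector; "alpha;beta" becomes two tags
            hr = InitPropVariantFromStringAsVector(tags, &pv);
            if (SUCCEEDED(hr))
            {
                hr = store->SetValue(PKEY_Keywords, pv);
                if (SUCCEEDED(hr)) hr = store->Commit();
                PropVariantClear(&pv);
            }
            store->Release();
            return hr;
        }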

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server, copy it to some other location, and have it copy only the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. In the end we are copying only about 3%-20% of databases which are as large as 500GB.

    I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups, etc. (Actually it helped find some bad stored procs.)

    Overall I have a working script which is lacking on performance (and at times times out), and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data; the script is written in PowerShell:

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120;
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable;
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader();
        $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me; when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout, not the throughput. I really would like to know why the network is not being used fully.

    Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also, some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases.

    When copying one of the larger databases, there is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed
        prior to completion of the operation or the server is not responding.

    It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out.

    My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at this point each time?
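
    One construction detail worth double-checking, sketched below: pass options to the SqlBulkCopy constructor (rather than casting the connection) so that TableLock is actually in effect, and consider BulkCopyTimeout = 0, which means no limit. The option names are the standard System.Data ones; the particular choices here are suggestions, not a known fix for the mid-copy timeout:

        $options = [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock -bor
                   [System.Data.SqlClient.SqlBulkCopyOptions]::UseInternalTransaction
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destination, $options, $null)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout      = 0        # 0 = no limit
        $bulkData.BatchSize            = 10000    # commit in chunks; tune per table
        try {
            $reader = $cmd.ExecuteReader()
            $bulkData.WriteToServer($reader)
        }
        finally {
            $reader.Close()
            $bulkData.Close()
        }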

    Read the article

  • Silverlight 4 - elevated permission *inside* the browser

    - by Doug
    I know Silverlight 4 can handle elevated permissions outside the browser. Is there a way to accomplish this inside the browser? I need to make a folder/file upload manager that gives a better user experience than the standard file-input control, and I'd like to implement it in Silverlight.

    I know Java has an option to gain elevated permissions, but you have to attach a signed certificate to your app. Does Silverlight 4 have a similar option - to gain elevated permissions by attaching a signed certificate (after warning the user, of course)?

    -Doug

    Read the article

  • jquery full calendar

    - by madrat
    Hello everybody. I am new to fullCalendar and I do not know how to use the gotoDate method. What I have is a tabbed page with 12 tabs (every tab is a month of the year). What I need is for the calendar, when it loads for each tab, to switch to the month assigned to that tab (e.g. when I click and load the page for January, the calendar shows the days of January, and so on). Thanks for your help.
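
    A sketch using fullCalendar 1.x's gotoDate(year, month[, date]) method - months are zero-based, and the #calendar/.month-tab markup is made up:

        // render once, then jump months as tabs are clicked
        $('#calendar').fullCalendar({ /* ...options... */ });

        $('.month-tab').click(function () {
            var month = $(this).data('month');   // 0 = January ... 11 = December
            $('#calendar').fullCalendar('gotoDate', 2010, month);
        });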

    Read the article

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem with the Verity search service bundled with ColdFusion 8. The issue is timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance, if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64-bit as far as I know. The error I receive is:

        An error occurred while performing an operation in the Search Engine library.
        Error reading collection information.: com.verity.api.administration.ConfigurationException:
        java.io.IOException: Read timed out

    Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should cause it to become unusable. The collections aren't big - there are maybe around 5,000 records between all of them. Any ideas?

    Read the article

  • Using IVMRWindowlessControl to display video in a Winforms Control and allow for full screen toggle

    - by Jonathan Websdale
    I've recently switched from using the IVideoWindow interface to IVMRWindowlessControl in my custom WinForms control to display video. The reason for the switch was to allow zoom capabilities on the video within the control. However, in switching over I've found that the FullScreen mode from IVideoWindow is not available, and I am currently trying to replicate it using the SetVideoPosition() method. I can size the video in my control to the same resolution as the screen, but I can't get the control to position itself at the top/left of the screen and become the topmost window. Any ideas on how to achieve this, since IVideoWindow::put_FullScreenMode just did it all for you?
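
    A sketch of how manual full-screen is often done in WinForms, since IVMRWindowlessControl has no FullScreenMode of its own: reparent the video control onto a borderless, top-most form sized to the current monitor, then re-issue SetVideoPosition with the new client rectangle. Names like videoControl are placeholders:

        // toggle: move the video control onto a borderless, top-most form
        var screen = Screen.FromControl(videoControl);      // current monitor
        var fsForm = new Form {
            FormBorderStyle = FormBorderStyle.None,
            StartPosition   = FormStartPosition.Manual,
            Bounds          = screen.Bounds,                // cover the monitor
            TopMost         = true
        };
        videoControl.Parent = fsForm;                       // reparent the control
        videoControl.Dock   = DockStyle.Fill;
        fsForm.Show();

        // then tell the VMR where to paint; the exact rect types depend on
        // your DirectShow interop layer:
        // windowlessCtrl.SetVideoPosition(sourceRect, videoControl.ClientRectangle);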

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz')
          ON id = textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
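
    For comparison, the Core Data analogue of the prototype query would look roughly like this - entity and attribute names (Word/word) are assumptions following the words-table design above, and BEGINSWITH on an indexed string attribute compiles to a range scan much like the BETWEEN trick. The model editor does expose an "Indexed" checkbox per attribute, which becomes a SQLite index in the generated store:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Word"
                                     inManagedObjectContext:context];
        // prefix match on the indexed word attribute
        request.predicate = [NSPredicate predicateWithFormat:
                                @"word BEGINSWITH %@", searchText];
        request.fetchLimit = 50;   // same quick-display cap as the SQL prototype
        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];
        // follow each Word's relationship back to its text entity for display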

    Read the article
