Search Results

Search found 4631 results on 186 pages for 'scan conversion'.

Page 64 of 186

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery with CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to interact properly with the background art, I wrote a program to convert the depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera-angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it with an HD video codec such as H.264.

    The problem is that in order to keep our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Because of the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize the precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the back end.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion-video facilities in the engine yet, but we will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion-video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and we'll also need to play two streams simultaneously in lockstep, plus, in some cases, audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately involves branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB. Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems would best suit this use case.
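
    For context on the trade-off mentioned above, here is a rough C++ sketch of the kind of naive 16-bit-depth-to-YCbCr packing the question alludes to (high byte into luma, low byte into a chroma channel). This is purely illustrative and is not the paper's mapping; the paper's branchier scheme exists precisely because chroma subsampling and quantization in the codec damage the low-order bits of a straight byte split.

        // Illustrative only: naive, branch-free packing of 16-bit depth into 8-bit
        // YCbCr channels. Precision of the low byte suffers under real codecs.
        #include <cstdint>

        struct YCbCr8 { uint8_t y, cb, cr; };

        inline YCbCr8 packDepthNaive(uint16_t depth)
        {
            YCbCr8 p;
            p.y  = static_cast<uint8_t>(depth >> 8);    // 8 MSBs survive best in luma
            p.cb = static_cast<uint8_t>(depth & 0xFF);  // 8 LSBs ride in chroma
            p.cr = 128;                                 // unused, keep chroma neutral
            return p;
        }

        inline uint16_t unpackDepthNaive(const YCbCr8& p)
        {
            return static_cast<uint16_t>((p.y << 8) | p.cb);
        }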

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally so far) and am not sure how to go about fixing it. The database itself has 44 tables, and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The tables are built by JMDB from the IMDB flat files. The SQL query I am about to show is also from that program (which likewise experiences very slow search times). I have tried to include as much information as I can, such as the explain plan:

        QUERY PLAN
        HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)
          Output: public.movies.title, public.movies.movieid, public.movies.year
          ->  Append  (cost=39094.17..46491.79 rows=98 width=46)
                ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)
                            Output: public.movies.title, public.movies.movieid, public.movies.year
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)
                            Output: akatitles.movieid, akatitles.language, akatitles.title
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                      ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)
                            Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid
                            Index Cond: (public.movies.movieid = akatitles.movieid)

    The query itself:

        SELECT * FROM (
            (SELECT DISTINCT title, movieid, year FROM movies
             WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
            UNION
            (SELECT movies.title, movies.movieid, movies.year FROM movies
             INNER JOIN akatitles ON movies.movieid = akatitles.movieid
             WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    It returns 612 rows in 9078 ms. The database backup (plain text) is 1.61GB. It's a really complex query and I don't fully understand it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni

    Read the article

  • Character Encoding: â??

    - by akaphenom
    I am trying to piece together the mysterious string of characters â?? that I am seeing quite a bit of in our database. I am fairly sure this is the result of a conversion between character encodings, but I am not completely positive. Users are able to enter text (or cut and paste) into an Ext-Js rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters. Is there any way to decode these back to their original meaning, if I were able to discover the correct encoding, or has there been a loss of bits or bytes through the conversion process? Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding depend on where the user copied from? Thank you.

    The website is UTF-8. We are using MS SQL Server 2005:

        SELECT serverproperty('Collation')              -- Server default collation: Latin1_General_CI_AS
        SELECT databasepropertyex('xxxx', 'Collation')  -- Database default: SQL_Latin1_General_CP1_CI_AS

    and the column:

        Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
        text         varchar  no        -1                   yes       no                  yes                   SQL_Latin1_General_CP1_CI_AS

    The relevant documentation note: "The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a 'language event'), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters." So this may be the root cause of the problem... and not an easy one to solve on our end.

    Read the article

  • Design problem with callback functions in android

    - by Franz Xaver
    Hi folks! I'm currently developing an Android app that accesses wifi values; that is, the application needs to scan for all access points and their specific signal strengths. I know that I have to extend the class BroadcastReceiver, overriding the method BroadcastReceiver.onReceive(Context context, Intent intent), which is called when the values are ready. Perhaps there are solutions provided by the Android system itself, but I'm relatively new to Android, so I could use some help. The problem I encountered is that I have one class (an activity, thus controlled by the user) that needs these scan results for two different things (either to save the values in a database, or to use them for further calculations, but not both at the same moment). So how should I design the callback system in order to "transport" the scan results from onReceive(Context context, Intent intent) to the operation intended by the user? My first solution was to define enums for each use case (save, or use for calculations) which wlan-interested classes have to submit when querying for the values. But that would force the BroadcastReceiver-extending class to save the current enum and use it as a parameter in the callback function of the querying class (this querying class needs to know what it asked for when being called back). That seems kind of dirty to me ;) So, anyone have a good idea for this?

    Read the article

  • Error Converting PIL B&W images to Numpy Arrays

    - by Elliot
    I am getting weird errors when I try to convert a black-and-white PIL image to a numpy array. An example of the code I am working with is below.

        if image.mode != '1':
            image = image.convert('1')      # convert to B&W
        data = np.array(image)              # convert data to a numpy array
        n_lines = data.shape[0]             # number of raster passes
        line_range = range(data.shape[1])
        for l in range(n_lines):            # process one horizontal line of the image
            line = data[l]
            for n in line_range:
                if line[n] == 1:
                    write_line_to(xl, z + scale*n, speed)  # conversion to other program code
                elif line[n] == 0:
                    run_to(xl, z + scale*n)                # conversion to other program code

    I have tried this using both array and asarray for the conversion, and got different errors. If I use array, then the data I get out is nothing like what I put in. It looks like several very shrunken partial images side by side, with the remainder of the image space filled in with black. If I use asarray, then the whole of Python crashes during the raster step (on a random line). If I work with a greyscale image ('L'), then neither of these errors occurs for either array or asarray. Does anyone know what I am doing wrong? Is there something odd about the way PIL encodes B&W images, or something special I need to pass numpy to make it convert properly?

    Read the article

  • Weird bug with C++ lambda expressions in VS2010

    - by Andrei Tita
    In a couple of my projects, the following code:

        class SmallClass {
        public:
            int x1, y1;
            void TestFunc() {
                auto BadLambda = [&]() {
                    int g = x1 + 1;                 // ok
                    int h = y1 + 1;                 // C2296
                    int l = static_cast<int>(y1);   // C2440
                };
                int y1_copy = y1;                   // it works if you create a local copy
                auto GoodLambda = [&]() {
                    int h = y1_copy + 1;            // ok
                    int l = this->y1 + 1;           // ok
                };
            }
        };

    generates

        error C2296: '+' : illegal, left operand has type 'double (__cdecl *)(double)'

    or alternatively

        error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int'

    You get the picture. It also happens when capturing by value. The error seems to be tied to the member name "y1". It happened in different classes, different projects and with (seemingly) any type for y1; for example, this code:

        [...]
        MyClass y1;
        void TestFunc() {
            auto BadLambda = [&]() -> void {
                int l = static_cast<int>(y1);       // C2440
            };
        }

    generates both of these errors:

        error C2440: 'static_cast' : cannot convert from 'MyClass' to 'int'
                     No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
        error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int'
                     There is no context in which this conversion is possible

    It didn't, however, happen in a completely new project. I thought maybe it was related to Lua (the projects where I managed to reproduce this bug both used Lua), but I did not manage to reproduce it in a new project linking Lua. It doesn't seem to be a known bug, and I'm at a loss. Any ideas as to why this happens? (I don't need a workaround; there are a few in the code already.) Using Visual Studio 2010 Express version 10.0.40219.1 SP1Rel.
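
    A plausible explanation for the 'double (__cdecl *)(double)' in both diagnostics: the MSVC CRT's <math.h> declares a non-ANSI Bessel function double __cdecl y1(double), and VS2010's lambda name lookup appears to bind the unqualified y1 to that global function instead of the enclosing class's member. The sketch below is hypothetical (it assumes <math.h> is pulled in somewhere, e.g. through third-party headers such as Lua's); qualifying with this-> sidesteps the lookup problem, which matches the workaround already shown above.

        // Hypothetical repro sketch: with <math.h> in scope, a global ::y1 (Bessel
        // function, type double(double)) exists. Under VS2010, an unqualified 'y1'
        // inside a lambda can bind to that function pointer, matching the errors.
        #include <math.h>

        class SmallClass {
        public:
            int x1, y1;
            void TestFunc() {
                auto Lambda = [&]() {
                    int ok = this->y1 + 1;   // explicit this-> always names the member
                    (void)ok;
                    // int bad = y1 + 1;     // may pick up ::y1 (Bessel) under VS2010
                };
                Lambda();
            }
        };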

    Read the article

  • Input string was not in correct format

    - by Luke
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace measurementConverter
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // read in the file
                    StreamReader convert = new StreamReader("../../convert.txt");

                    // define variables
                    string line = convert.ReadLine();
                    int conversion;
                    int numberIn;
                    float conversionFactor;

                    Console.WriteLine("Enter the conversion in the form (amount,from,to)");
                    String inputMeasurement = Console.ReadLine();
                    string[] inputMeasurementArray = inputMeasurement.Split(',');

                    while (line != null)
                    {
                        string[] fileMeasurementArray = line.Split(',');
                        if (fileMeasurementArray[0] == inputMeasurementArray[1])
                        {
                            if (fileMeasurementArray[1] == inputMeasurementArray[2])
                            {
                                Console.WriteLine("{0}", fileMeasurementArray[2]);
                            }
                        }
                        line = convert.ReadLine();

                        // convert to int
                        numberIn = Convert.ToInt32(inputMeasurementArray[0]);
                        conversionFactor = Convert.ToInt32(fileMeasurementArray[2]);
                        conversion = (numberIn * conversionFactor);
                    }
                    Console.ReadKey();
                }
            }
        }

    Hello, I am trying to get the calculating going. On the line conversionFactor = Convert.ToInt32(fileMeasurementArray[2]); I am getting an error saying "Input string was not in correct format". Please help! The text file consists of the following:

        ounce,gram,28.0
        pound,ounce,16.0
        pound,kilogram,0.454
        pint,litre,0.568
        inch,centimetre,2.5
        mile,inch,63360.0

    Read the article

  • WriteableBitmap failing badly, pixel array very inaccurate

    - by dawmail333
    I have tried, literally for hours, and I have not been able to budge this problem. I have a UserControl that is 800x369, and it contains, simply, a path that forms a world map. I put this on a landscape page, then I render it into a WriteableBitmap. I then run a conversion to turn the 1D Pixels array into a 2D array of integers. Then, to check the conversion, I wire up the custom control's click command to use the Point.X and Point.Y relative to the custom control in the newly created array. My logic is thus:

        wb = new WriteableBitmap(worldMap, new TranslateTransform());
        wb.Invalidate();
        intTest = wb.Pixels.To2DArray(wb.PixelWidth);

    My conversion logic is as such:

        public static int[,] To2DArray(this int[] arr, int rowLength)
        {
            int[,] output = new int[rowLength, arr.Length / rowLength];
            if (arr.Length % rowLength != 0)
                throw new IndexOutOfRangeException();
            for (int i = 0; i < arr.Length; i++)
            {
                output[i % rowLength, i / rowLength] = arr[i];
            }
            return output;
        }

    Now, when I do the checking, I get completely and utterly strange results: apparently all pixels are at values of either -1 or 0, and these values are completely independent of the original colours. Just for posterity, here's my checking code:

        private void Check(object sender, MouseButtonEventArgs e)
        {
            Point click = e.GetPosition(worldMap);
            ChangeNotification(intTest[(int)click.X, (int)click.Y].ToString());
        }

    The result shows absolutely no correlation to the path that the WriteableBitmap has rendered into it. The path has a fill of solid white. What the heck is going on? I've tried for hours with no luck. Please, this is the major problem stopping me from submitting my first WP7 app. Any guidance?
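
    One note on the -1/0 readings, in case it helps interpret them: WriteableBitmap.Pixels on WP7/Silverlight stores each pixel as a 32-bit premultiplied ARGB integer (worth double-checking against the docs), so opaque white comes out as 0xFFFFFFFF, which prints as -1 when treated as a signed int, while fully transparent pixels are 0. In other words, the -1s may actually be the white path and the 0s the untouched background. The bit layout is language-agnostic; a minimal C++ illustration:

        // Illustrative only: interpret a 32-bit ARGB pixel value. Opaque white
        // (A=R=G=B=255) is 0xFFFFFFFF, i.e. -1 as a signed 32-bit integer; a fully
        // transparent pixel is 0.
        #include <cstdint>
        #include <cstdio>

        int main()
        {
            int32_t opaqueWhite = static_cast<int32_t>(0xFFFFFFFFu);
            std::printf("opaque white as signed int: %d\n", opaqueWhite);  // -1

            uint32_t p = static_cast<uint32_t>(opaqueWhite);
            std::printf("A=%u R=%u G=%u B=%u\n",
                        (p >> 24) & 0xFF, (p >> 16) & 0xFF,
                        (p >> 8) & 0xFF,  p & 0xFF);                       // 255 each
            return 0;
        }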

    Read the article

  • boost::python string-convertible properties

    - by Checkers
    I have a C++ class which has the following methods:

        class Bar {
            ...
            const Foo& getFoo() const;
            void setFoo(const Foo&);
        };

    where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class which, among other things, defines a property based on the previous two functions:

        class_<Bar>("Bar")
            ...
            .add_property(
                "foo",
                make_function(
                    &Bar::getFoo,
                    return_value_policy<return_by_value>()),
                &Bar::setFoo)
            ...

    I also mark the class as convertible to/from std::string:

        implicitly_convertible<std::string, Foo>();
        implicitly_convertible<Foo, std::string>();

    But at runtime I still get a conversion error when trying to access this property:

        TypeError: No to_python (by-value) converter found for C++ type: Foo

    How can I achieve the conversion without too much boilerplate in wrapper functions? (I already have all the conversion functions in class Foo, so duplication is undesirable.)
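
    For reference, implicitly_convertible only registers from-Python conversions, so (as far as I understand) a to_python converter for Foo still has to be registered explicitly. A minimal sketch of that, going through Foo's std::string conversion described above, might look like the following; the struct name and module name are purely illustrative.

        // Hedged sketch: register a by-value to_python converter for Foo.
        #include <boost/python.hpp>
        #include <string>
        // #include "Foo.h"   // Foo's definition is assumed to be available here

        struct foo_to_python_string
        {
            static PyObject* convert(const Foo& f)
            {
                // Relies on Foo's std::string cast operator, as described above.
                const std::string s = static_cast<std::string>(f);
                return boost::python::incref(boost::python::object(s).ptr());
            }
        };

        BOOST_PYTHON_MODULE(example)
        {
            boost::python::to_python_converter<Foo, foo_to_python_string>();
            // ... existing class_<Bar> registration and implicitly_convertible calls ...
        }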

    Read the article

  • Looking for Go equivalent of scanf.

    - by Stephen Hsu
    I'm looking for the Go equivalent of scanf(). I tried the following code:

        package main

        import (
            "scanner"
            "os"
            "fmt"
        )

        func main() {
            var s scanner.Scanner
            s.Init(os.Stdin)
            s.Mode = scanner.ScanInts
            tok := s.Scan()
            for tok != scanner.EOF {
                fmt.Printf("%d ", tok)
                tok = s.Scan()
            }
            fmt.Println()
        }

    I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... And how do I scan a line composed of a string and some integers? By changing the mode whenever a new data type is encountered? The package documentation says: "Package scanner: a general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use. Updated code:

        func main() {
            n := scanf()
            fmt.Println(n)
            fmt.Println(len(n))
        }

        func scanf() []int {
            nums := new(vector.IntVector)
            reader := bufio.NewReader(os.Stdin)
            str, err := reader.ReadString('\n')
            for err != os.EOF {
                fields := strings.Fields(str)
                for _, f := range fields {
                    i, _ := strconv.Atoi(f)
                    nums.Push(i)
                }
                str, err = reader.ReadString('\n')
            }
            r := make([]int, nums.Len())
            for i := 0; i < nums.Len(); i++ {
                r[i] = nums.At(i)
            }
            return r
        }

    Read the article

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were in straight SQL, it would be something like:

        UPDATE EHSIT
        SET e.IDMSObjID = s.IDMSObjID
        FROM EHSIT e, EHSIDMS s
        WHERE e.SITENUM = s.SITE_CODE

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and the sitenum/site_code columns are varchar in one and nvarchar in the other, so I'll have to do a data conversion so they'll match. How do I do it? I have a data flow object, with EHSIDMS as the source and EHSIT as the destination, and a data conversion to convert the Unicode to non-Unicode. But how do I update based on the match? I've tried it with the destination, using a SQL command as the data access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit it based on the fields matching? I'm about to export my source table to Excel or something and then try inputting from there, although it seems that all that would get me is to remove the data conversion step. Shouldn't there be an update data task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which one it is?

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi. I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...   -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?

    Read the article

  • IS operator behaving a bit strangely

    - by flockofcode
    1) According to my book, the is operator can check whether an expression E can be converted to the target type only if the conversion is a reference conversion, boxing, or unboxing. Since in the following example is doesn't check for any of those three kinds of conversion, the code shouldn't work, but it does:

        int i = 100;
        if (i is long)   // returns true, indicating that conversion is possible
            l = i;

    2) a)

        B b;
        A a = new A();
        if (a is B)
            b = (B)a;
        int i = b.l;

        class A { public int l = 100; }
        class B : A { }

    The above code always causes the compile-time error "Use of unassigned variable". If the condition a is B evaluates to false, then b won't be assigned a value, but if the condition is true, then it will. Thus, by allowing such code, the compiler would have no way of knowing whether the use of b in the code following the if statement is valid (since it can't know whether a is B evaluates to true or false) - but why should it know that? Instead, why couldn't the runtime handle this?

    b) But if instead we're dealing with non-reference types, then the compiler doesn't complain, even though the code is identical. Why?

        int i = 100;
        long l;
        if (i is long)
            l = i;

    thank you

    Read the article

  • Is there a way in VS2008 (C#) to see all the possible exception types that can originate from a method call?

    - by Matt
    Is there a way in the VS2008 IDE for C# to see all the possible exception types that can originate from a method call, or even from an entire try-catch block? I know that IntelliSense or the Object Browser tells me a method can throw these types of exceptions, but is there another way than using the Object Browser every time? Something more accessible while coding? Furthermore, I don't think IntelliSense or the Object Browser do anything more than read the XML code comments. Shouldn't it be possible to scan a class's source and find all the exception types that can be thrown? (Forget pathing based on method input - just scan the code for exception types.) Am I wrong? Extending this idea, you should be able to hover over the try or catch keywords and be presented with a tooltip listing all the types of exceptions that can be thrown. My question boils down to: does a VS2008 add-on like this exist? Does VS2010 do this, perhaps? If not, could you implement it the way I've described, by scanning the class code for thrown exception types, and would people find it useful? Exceptions bubble up, so you have to scan every bit of code in every method call, which I guess could be impractical, though I suppose you could build an index the first time and increase your speed that way. (It might be a cool little project...)

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?

    Read the article

  • multiple mysql_real_query() in while loop

    - by Steve
    It seems that when I have one mysql_real_query() call in a continuous while loop, the query executes OK. However, if multiple mysql_real_query() calls are inside the while loop, one right after the other, then depending on the query, sometimes neither the first query nor the second query executes properly. This seems like a threading issue to me. I'm wondering if the MySQL C API has a way of dealing with this? Does anyone know how to handle it? mysql_free_result() doesn't help, since I am not even storing the results.

        // keep polling as long as the stop character '-' is not read
        while (szRxChar != '-')
        {
            // Check if a read is outstanding
            if (HasOverlappedIoCompleted(&ovRead))
            {
                // Issue a serial port read
                if (!ReadFile(hSerial, &szRxChar, 1, &dwBytesRead, &ovRead))
                {
                    DWORD dwErr = GetLastError();
                    if (dwErr != ERROR_IO_PENDING)
                        return dwErr;
                }
            }

            // Wait 5 seconds for serial input
            if (!(HasOverlappedIoCompleted(&ovRead)))
            {
                WaitForSingleObject(hReadEvent, RESET_TIME);
            }

            // Check if serial input has arrived
            if (GetOverlappedResult(hSerial, &ovRead, &dwBytesRead, FALSE))
            {
                // Wait for the write
                GetOverlappedResult(hSerial, &ovWrite, &dwBytesWritten, TRUE);

                // load tagBuffer with the byte stream
                tagBuffer[i] = szRxChar;
                i++;
                tagBuffer[i] = 0;   // char arrays are \0 terminated

                // run query with tagBuffer
                if (strlen(tagBuffer) == PACKET_LENGTH)
                {
                    sprintf(query, "insert into scan (rfidnum) values ('");
                    strcat(query, tagBuffer);
                    strcat(query, "')");
                    mysql_real_query(&mysql, query, (unsigned int)strlen(query));
                    i = 0;
                }

                mysql_real_query(&mysql,
                    "insert into scan (rfidnum) values ('2nd query')",
                    (unsigned int)strlen("insert into scan (rfid) values ('2nd query')"));
                mysql_free_result(res);
            }
        }
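
    One detail worth flagging in the snippet above: the second mysql_real_query() call measures a different literal than it sends ("rfid" vs "rfidnum"), so the statement actually transmitted is truncated by three characters. More generally, checking each call's return value and mysql_error() usually reveals exactly what the server rejected. A minimal sketch of that pattern (it assumes <mysql.h>, <stdio.h> and <string.h> are included as in the original program, and a connected handle):

        // Hedged sketch: measure the same string you send, and check every return value.
        static int run_insert(MYSQL* conn, const char* sql)
        {
            if (mysql_real_query(conn, sql, (unsigned long)strlen(sql)) != 0)
            {
                fprintf(stderr, "query failed (%u): %s\n",
                        mysql_errno(conn), mysql_error(conn));
                return -1;
            }
            // No mysql_free_result() needed here: INSERT returns no result set, so
            // there is nothing to store and nothing to free.
            return 0;
        }

        // usage: run_insert(&mysql, "insert into scan (rfidnum) values ('2nd query')");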

    Read the article

  • Pass NSURL from One Class To Another

    - by user717452
    In my appDelegate, in didFinishLaunchingWithOptions, I have the following:

        NSURL *url = [NSURL URLWithString:@"http://www.thejenkinsinstitute.com/Journal/"];
        NSString *content = [NSString stringWithContentsOfURL:url];
        NSString *aString = content;
        NSMutableArray *substrings = [NSMutableArray new];
        NSScanner *scanner = [NSScanner scannerWithString:aString];
        [scanner scanUpToString:@"<p>To Download the PDF, " intoString:nil]; // Scan all characters before #
        while (![scanner isAtEnd]) {
            NSString *substring = nil;
            [scanner scanString:@"<p>To Download the PDF, <a href=\"" intoString:nil]; // Scan the # character
            if ([scanner scanUpToString:@"\"" intoString:&substring]) {
                // If the space immediately followed the #, this will be skipped
                [substrings addObject:substring];
            }
            [scanner scanUpToString:@"" intoString:nil]; // Scan all characters before next #
        }
        // do something with substrings
        NSString *URLstring = [substrings objectAtIndex:0];
        self.theheurl = [NSURL URLWithString:URLstring];
        NSLog(@"%@", theheurl);
        [substrings release];

    The console printout for theheurl gives me a valid URL ending in .pdf. In the class where I would like to load the URL, I have the following:

        - (void)viewWillAppear:(BOOL)animated {
            _appdelegate.theheurl = currentURL;
            NSLog(@"%@", currentURL);
            NSLog(@"%@", _appdelegate.theheurl);
            [worship loadRequest:[NSURLRequest requestWithURL:currentURL
                                                  cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                              timeoutInterval:60.0]];
            timer = [NSTimer scheduledTimerWithTimeInterval:(1.0/2.0)
                                                     target:self
                                                   selector:@selector(tick)
                                                   userInfo:nil
                                                    repeats:YES];
            [super viewWillAppear:YES];
        }

    However, both NSLogs in that class come back null. What am I doing wrong in getting the NSURL from the AppDelegate to the class that loads it?

    Read the article

  • INS-40719 error when Install Oracle RAC?

    - by Data-Base
    I'm trying to (learn how to) install Oracle RAC 11g on CentOS 6. All went OK so far, but I get an INS-40719 error message regarding the SCAN name. I do not have a DNS server and I'm not going to try to use one in this setup. I added this line to /etc/hosts:

        192.168.244.100 rac-cluster

    then used "rac-cluster" as the SCAN name, and it's still not working, with the same error message! Can anyone guide me on how to make it work? 1. Do I have to add "192.168.244.100 rac-cluster" to /etc/hosts on both nodes? 2. Do I need to edit/add anything else on the nodes? Cheers

    Read the article

  • Intel RST crashing! IAStorUI unstable in win 8.1

    - by user269549
    I have an ASRock Z87E-ITX motherboard, with a new 64GB Corsair SSD used as cache and a new Western Digital Black Scorpion 750GB 2.5" drive, running Windows 8.1. Latest software or not, I can't open the Intel RST software without it saying "IAStorUI has stopped working. A problem caused the program to stop working correctly", blah blah. I have had some issues recently with robotic sounds causing fps drops etc., but found it was the HDD. After a standard Windows scan and fix (and the update to Windows 8.1), I haven't been able to open Intel RST. I thought maybe I should look for a tool to check the SSD cache drive, but as it doesn't show up as a drive, I'm unsure how any program can scan it.

    Read the article

  • Canon LiDE 600F FAU on Snow Leopard?

    - by jdmuys
    Hello, I have been able to use my Canon LiDE 600F scanner under Snow Leopard to scan paper sheets, after installing Canon's latest driver software. However, I cannot find a way to make the FAU (Film Adaptor Unit) work: Canon's software wants to calibrate it first and gives the error message "Calibration cannot be performed. Pull out the film. 182.0.0" (of course there is no film). Hamrick's VueScan doesn't seem to support the FAU, and Apple's Image Capture doesn't offer a film option either. Did I miss something? Has anybody managed to scan film (positive or negative) using the LiDE 600F under Snow Leopard? Many thanks

    Read the article

  • Why do I get error 0x80070004 when trying to update to Windows 8.1 from Windows 8?

    - by Jeffrey Lin
    So, I'm trying to update Windows 8 to Windows 8.1 via the Windows Store, but every time I attempt it, the update downloads properly and then I get the error:

        Windows 8.1
        This app wasn't installed - view details

    When I click on it, it says:

        Something happened and Windows 8.1 could not be installed. Please try again.
        Error code: 0x80070004
        Try again    Cancel install

    What does this mean? A quick Google search yields nothing. I have tried rebooting, clearing the Store cache, and resetting Windows Update. A quick chkdsk scan shows no errors. An SFC scan shows that there are many issues: http://pastebin.com/TZiH8ZXZ Could this be the issue? I found the error log: http://pastebin.com/BXZEsejm Why is the registry corrupt?

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder contents when folders are moved to the same location. Imagine my surprise when that didn't happen, and now I have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after a roughly 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that are still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Remove CGI from IIS7

    - by jekcom
    I ran a security scan, and the scan said that all kinds of CGI stuff are a potential threat. This is part of the result:

        (ash) is present in the cgi-bin directory
        (bash) is present in the cgi-bin directory
        By exploiting this vulnerability, a malicious user may be able to execute arbitrary commands on a
        remote system. In some cases, the hacker may be able to gain root level access to the system, in
        which case the hacker might be able to cause copious damage to the system, or use the system as a
        jumping off point to target other systems on the network for intrusion and/or denial of service
        attacks.

    and many more related to the cgi-bin directory. First, I searched the whole server for a cgi-bin folder and did not find any. Second, I'm running my website on pure .NET and I don't use any scripts like PHP. The question is: how can I remove this CGI thing from IIS?

    Read the article

  • Fixing corrupt AVG vault? All files in USB drive are locked out.

    - by Kelsey
    I was doing a virus scan on an external USB drive, and while AVG was scanning, my system locked up and required a reboot. Since then, all data on the external drive is no longer accessible. I can see all the files in the root and the directories, but I cannot browse into any of them, as Windows 7 gives an error stating they are corrupt. If I show hidden files, there is a hidden AVG directory that I know was not there to begin with, and I am assuming it is some type of vault used to protect files while they are being scanned. So the drive's contents probably aren't truly gone; I think whatever manages the scan failed during the reboot and left the headers, or something, in a corrupt state. Does anyone know how to 'unlock' or recover this data? Luckily I can recover the data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.

    Read the article

  • Alignment requirements: converting basic disk to dynamic disk in order to set up software RAID?

    - by 0xC0000022L
    On Windows 7 x64 Professional I am struggling to convert a basic disk to a dynamic one. Under Disk Management in the MMC, the conversion is supposed to be initiated automatically, but it isn't. My guess: because of using third-party partitioning tools, there isn't enough space before and after the partitions (system-reserved/boot + system volume) to store the required metadata. When demoting a dynamic disk back to a basic disk manually, I noticed that some space seems to be required before and after the partitions. What are the exact alignment requirements that allow the on-board tools in Windows to do the conversion?

    Read the article
