Search Results

Search found 18916 results on 757 pages for 'sharing nothing architect'.


  • Where to get 1440 free cron jobs a day?

    - by Nok Imchen
    I'm making a program for my own use. In this program, I need to set up a cron job that runs every minute (24 hr * 60 min = 1,440 times a day), so I need a cron job with a frequency of 1 minute. I think Google App Engine gives free cron jobs, but I'm very new to it. I downloaded the Java SDK and read the documentation but understood nothing :( so I can't use Google App Engine. Is there any other free service like Google App Engine but with an easier interface? All I want is a cron job with a 1-minute frequency. Please help/suggest something. Thank you.
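    If no hosted service works out, a minimal self-hosted sketch in Python can fire a task once a minute. This is only an assumption about running it yourself (not an App Engine feature), and run_task is a hypothetical placeholder for the real work:

        import time

        def run_task():
            # hypothetical placeholder for whatever the cron job should do
            print("tick")

        if __name__ == "__main__":
            while True:
                started = time.time()
                run_task()
                # sleep out the rest of the minute so the task fires roughly 1440 times a day
                time.sleep(max(0, 60 - (time.time() - started)))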

    Read the article

  • ASP.NET and IsNew on the page level

    - by tyndall
    I've never seen this before in ASP.NET development. I'm trying to refactor 40 single-page ASP.NET pages to code-behind style. What does this code do?

        // Validate required parameters (if "new", then nothing is required)
        if (!this.IsNew())
        {
            if (string.IsNullOrEmpty(_billId))
            {
                responseErrorNo = 4;
                Utils.SendError(respErrNum);
            }
        }

    It's on a single-page-design ASP.NET page, in the Page_Load block. On a code-behind page this code (.IsNew) is not recognized. What am I missing here? Is there an MSDN page on IsNew of the "page"?

    Read the article

  • Spring security and Struts 2

    - by Thanksforfish
    I have a Struts 2 action with @Secured({"ROLE_ADMIN"}) to secure the execute method. In the execute method I assign a message to a member variable of the action, then return SUCCESS and end up on the JSP page. On the page I retrieve the action's member variable with <s:property>.

        private String greeting;

        public String execute() throws Exception {
            this.greeting = "Hello";
            return SUCCESS;
        }

        // getters and setters for greeting
        ...

        <s:property value="greeting" />

    The problem is that when the @Secured annotation is present the JSP shows nothing for the member variable, but when @Secured is removed the whole thing behaves properly and shows the message that was set into the member variable. It appears that the actual security is working OK, but when it is enabled via the annotation the member variable (or maybe the instance of the action) is not making its way onto the value stack. I can't see any error messages.

    Read the article

  • NSDictionary causing EXC_BAD_ACCESS

    - by aks
    I am trying to call objectForKey: on an NSDictionary ivar, but I get an EXC_BAD_ACCESS error. The NSDictionary is created using the JSON framework and then retained. The first time I use it (just after I create it, in the same run loop) it works perfectly fine, but when I try to access it later nothing works. I am using this code to try to figure out what is wrong:

        if (resultsDic == nil) {
            NSLog(@"results dic is nil.");
        }
        if ([resultsDic respondsToSelector:@selector(objectForKey:)]) {
            NSLog(@"resultsDic should respond to objectForKey:");
        }

    The dictionary is never nil, but it always crashes on respondsToSelector:. Any ideas?

    Read the article

  • paypal IPN php script

    - by pol
    I just realized that my PayPal IPN handler in PHP doesn't work anymore (and I changed nothing); I use the sample PHP script provided by PayPal. I tried to isolate the problem by running several tests, and at this point I can say that the problem is not the "VERIFIED" or "INVALID" check but comes from these lines:

        [...]
        fputs ($fp, $header . $req);
        while (!feof($fp)) {
            $res = fgets ($fp, 1024);
            if (strcmp ($res, "VERIFIED") == 0) {
                // check the payment_status is Completed
        [...]

    Does anyone know if PayPal changed something, or why it doesn't work? Thanks. PS: if I put my code before all the PayPal tests (before the line "if (!$fp)") it just works fine.

    Read the article

  • Deploying with Capistrano & Subversion. Working copy locked

    - by Rimian
    I'm deploying to a Debian server with Capistrano, and it fails due to a locked working copy. I narrowed it down to this:

        svn checkout http://myrepo.net/mysite/tags/1.0 /var/www/mysite/releases/1234

    So if I run:

        cap invoke COMMAND='svn checkout http://myrepo.net/mysite/tags/1.0 /var/www/mysite/releases/1234'

    I get an error:

        svn: Working copy '/var/www/mysite/releases/1' locked

    Clean up makes no difference. The same command runs fine from the server. When I list the files in 1234/ I can see all the .svn and working copy files. Can someone please point me in the right direction to resolve this? How do I tell if the working copy is really locked? svn status shows nothing...

    Read the article

  • can't click on input element behind div element

    - by Preston
    On my site, http://alsite.com.br/solalev/, I have some elements at the bottom of the page that I can't click through. Above the elements is a div called push. I use this div to make the footer always stay at the bottom of my page even when the content is smaller (I don't know if I'm doing this right, but it has worked). On Chrome and Firefox I can't click, but on IE it works. I tried this:

        .push {
            pointer-events: none;
        }

    but nothing happens...

    Read the article

  • WPF MenuItem Content "Name"...

    - by Kyle
    Hi, I have a LOT of MenuItems, and I want to be able to change their "Content" so that it displays in the program. When I load up the program, their "Content Name" is set in a Setter I created, but the problem is that I have almost a hundred MenuItem objects, and I need their display names in the program to be different (not the Setter's default). I could just create over 100 different Setters and change one line in each, but that is very time consuming. Is there a simpler approach? I want to be able to do this in the XAML where I am declaring them. Is there a way to do this? I've been searching and trying different approaches, but nothing has worked so far. Perhaps someone knows? Thanks

    Read the article

  • Python: Importing a file from a parent folder

    - by Sascha
    Now I know this question has been asked many times and I have looked at those other threads. Nothing has worked so far, from using sys.path.append('.') to a plain import foo. I have a Python file that needs to import a file that is in its parent directory. Can you help me figure out how the child file can successfully import a file from its parent directory? I am using Python 2.7. The structure is like so (each directory also has an __init__.py file in it):

        StockTracker/
            Comp/
                a.py
                SubComp/
                    b.py

    Inside b.py, I would like to import a.py. I have tried each of the following, but I still get an error inside b.py saying "There is no such module a":

        import a
        import .a
        import Comp.a
        import StockTracker.Comp.a

        import os
        import sys
        sys.path.append('.')
        import a
        sys.path.remove('.')
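    One common fix, shown here as a sketch that assumes the StockTracker/Comp/SubComp layout above and Python 2.7, is to put the parent directory of b.py on sys.path before importing:

        # SubComp/b.py -- sketch assuming the directory layout above
        import os
        import sys

        # add Comp/ (the parent of this file's directory) to the import path
        parent_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))
        if parent_dir not in sys.path:
            sys.path.insert(0, parent_dir)

        import a  # now resolves to Comp/a.py

    Alternatively, since every directory already has an __init__.py, running the code as a package from the StockTracker directory (python -m Comp.SubComp.b) lets a normal from Comp import a work without touching sys.path.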

    Read the article

  • SQL Server Express 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro install with no virus checking and nothing installed except SQL Server. Also, the recovery option for all the databases is set to "Simple".

    Read the article

  • Delphi - online technical tests

    - by RBA
    I have searched for some online Delphi programming tests, and apart from the small test for Delphi certification and several tests on Delphi.about.com I found nothing. Any ideas where I can find some online Delphi tests? LE: defining online Delphi programming tests: technical questions about Delphi fundamentals, data types, classes, libraries, generics, database concepts, etc. Examples here (Delphi Developer Certification Exam Study Guide) and here. LE2: tests to take after you have read all the articles from this question: Questions every good Delphi developer should be able to answer?

    Read the article

  • Post Loading ads from Google Admanager

    - by Prem
    I have changed the code around to basically load an ad at the bottom of the page in a hidden div, and attached an onload event handler that calls document.getElementById(xxx).appendChild() to take the hidden ad and move it into the right spot on my page. This works GREAT. However, when the ad is a text ad, after I move the ad there is nothing in the rendered iframe. I ran tests to see what it looks like before I move it, and sure enough the text links load in the iframe, but the second I do the appendChild call to move the div that contains the ad I seem to lose the contents of the iframe. Any ideas what's going on?

        <div id="myad" style="display: none;">
            GA_googleFillSlot("MyADSlotName");
        </div>
        <script>
            window.onload = function() {
                // leader board
                document.getElementById('adplaceholder').appendChild(document.getElementById('myAd'));
                document.getElementById('myAd').style.display = '';
        </script

    Read the article

  • How can I generate a list of words made up of combinations of three word lists in Perl?

    - by Chris Denman
    I have three lists of words. I would like to generate a single list of all the combinations of words from the three lists.

        List 1: red green blue
        List 2: one two
        List 3: apple banana

    The final list would look like so:

        red one apple
        red two apple
        red one banana
        red two banana
        ... and so on

    Ideally I'd like to pass in three arrays and have the routine return one array. I have written a simple loop like so:

        foreach $word1 (@list1) {
            foreach $word2 (@list2) {
                foreach $word3 (@list3) {
                    print "$word1 $word2 $word3\n";
                }
            }
        }

    However, this doesn't work if there's nothing in the second or third list (I may only want to iterate over one, two or three lists at a time; in other words, if I only supply two lists it should iterate over those two lists).
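    The usual trick is to drop the empty lists before taking the product. This is not Perl, but a sketch of the same idea in Python using itertools.product, with the three example lists above:

        from itertools import product

        list1 = ["red", "green", "blue"]
        list2 = ["one", "two"]
        list3 = ["apple", "banana"]

        # keep only the non-empty lists so a missing list doesn't wipe out every combination
        lists = [lst for lst in (list1, list2, list3) if lst]

        for words in product(*lists):
            print(" ".join(words))

    The same filtering idea carries over to the Perl version: build a list of only the non-empty array references first, then iterate over just those.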

    Read the article

  • Posting to tumblr in PHP

    - by Sherif
    I am trying to make a test post to my Tumblr blog with a PHP script (that will eventually be run as a cron job). I have been browsing Google and have read many answers on here, and the closest I found is explained in this post: Tumblr OAuth using PHP's OAuth class. I am using the code in his tutorial here: http://vigrond.com/blog/2012/02/04/oauth-extension-php-and-the-tumblr-api/ pretty much as is, replacing the blog URL with mine and of course the consumer and secret keys. When I run the .php script via my browser, however, nothing happens. Any ideas? EDIT: The only error I found in cPanel's error log is this:

        [Wed Oct 31 00:29:25 2012] [error] [client xx.xx.xx.xx ] PHP Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /path on line 14

    But I fixed this and the error does not appear anymore.

    Read the article

  • xslt dropdown problems

    - by Primetime
    I have a page with two drop-down boxes created in an XSLT. I am using onchange in both blocks to call a JavaScript function. If both drop-downs are loaded on the page, only the first drop-down has a working onchange. I checked the page source and it shows both onchange events. Does anyone have any idea how to fix this? Example: with two drop-downs, select the first one and it refreshes, select the second and it does nothing, select the first one again and it still refreshes. Code removed.

    Read the article

  • Removing the email validation requirement - Login Toboggan

    - by Rob Orr
    I'm building a premium membership site where a visitor can purchase a role and gain access to the privileged content using Ubercart. I've got all that working fine, but the last tiny snag my client wants removed is the validation email requirement that's fired when someone signs up on the site with Login Toboggan (6.1.9). I've got nothing set that forces this extra step, and I've come to believe this may be a feature of Drupal core (Acquia distro 6.22) for any user that registers. I was hoping this module (Login Toboggan) would eliminate that step, but I've not yet been able to make it do so. I can allow the newly registered user access by setting that in the module, but the notification and validation email requirement still remains. Can anyone recommend a way around this? I just want visitors to be able to come to the site and purchase their membership without any validation/confirmation email. Is this possible? Thanks - Rob

    Read the article

  • LDAP in medium trust

    - by eych
    I have a solution with one website and several projects. The projects all have the AllowPartiallyTrustedCallers attribute and are strongly named. The site works in full trust. However, after setting the trust to medium, I get a System.Security.SecurityException: Request failed error as soon as I browse to the site. In my projects I have calls to LogOnUser, as well as many calls to various System.DirectoryServices.AccountManagement methods. Can this site run with medium trust, or do I have to have full trust for all the LDAP calls? As I mentioned, I've set the AllowPartiallyTrustedCallers attribute on all projects; not sure what else to do. Also, I have no idea what/where the error is being generated. The event logs on the server have nothing regarding this SecurityException. Is there any way to find out where the error occurs so I can attempt to rewrite some code? [running .NET 4.0 on Win2k8R2]

    Read the article

  • Content pulled in via jQuery AJAX is not associating with previous functions

    - by Jarsen
    I'm using the jQuery .load() function to pull in data to populate a div. The content I'm pulling in has CSS selectors that should match up with jQuery click functions I've written. However, when I load in the data, although the correct CSS selectors are there, nothing happens when I click on areas that should invoke a response from jQuery. Do I need to do some sort of reload? Here's the code I've got for the jQuery AJAX:

        $(document).ready(function() {
            // AJAX functionality for drupal nodes
            $(".ajax_node").click(function() {
                var ajax_src = $(this).attr("href"); // we're gonna load the href
                // empty target div and load in content
                // the space in the string before the #id is necessary
                $("#ajax_area").empty().load(ajax_src + " #ajax_wrapper");
                return false; // stops the link from following through
            });

            // General AJAX functionality
            $(".ajax").click(function() {
                var ajax_src = $(this).attr("href");
                $("#ajax_area").empty().load(ajax_src);
                return false;
            });
        });

    Read the article

  • JavaScript array to PHP array then process in PHP and return

    - by Constructor
    Example JavaScript:

        var mycourses = new Array();
        mycourses[0] = "History";
        mycourses[1] = "Math";
        mycourses[1][0] = "Introduction to math";
        mycourses[1][1] = "Math 2";
        mycourses[1][2] = "Math 3";

    PHP will then run these values through functions (please note the values are mostly not strings as in the example above, but rather numbers), and the functions will return some text which will then be displayed in a form. How should I go about doing this? P.S.: I found some similar stuff, but nothing quite like this... as far as I can see I will have to use JSON (is there a way to encode it from JS automatically? I saw this for strings) and AJAX.

    Read the article

  • Cannot save model due to bad transaction? Django

    - by Kenneth Love
    Trying to save a model in Django admin and I keep getting the error: Transaction managed block ended with pending COMMIT/ROLLBACK I tried restarting both the Django (1.2) and PostgreSQL (8.4) processes but nothing changed. I added "autocommit": True to my database settings but that didn't change anything either. Everything that Google has turned up has either not been answered or the answer involved not having records in the users table, which I definitely have. The model does not have a custom save method and there are no pre/post save signals tied to it. Any ideas or anything else I can provide to make answering this easier?
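    For what it's worth, one pattern that avoids leaving a managed transaction unresolved in Django 1.2 is to take over the transaction explicitly and always commit or roll back; a sketch, where save_report and report are hypothetical stand-ins for your own view code and model instance:

        from django.db import transaction

        @transaction.commit_manually
        def save_report(report):
            # always leave the managed block with an explicit COMMIT or ROLLBACK
            try:
                report.save()
            except Exception:
                transaction.rollback()
                raise
            else:
                transaction.commit()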

    Read the article

  • Will SerialPort DataRecieved Event Trigger Repeatedly..

    - by Gunner
    Suppose I read data from the SerialPort whenever there are 100 or more bytes available, and otherwise do nothing. That means the remaining data will still be available in the SerialPort buffer. This reading is done inside the DataReceived event handler. Now suppose a situation arises where there are, say, 50 bytes in the SerialPort buffer and no more data comes. From what I understand, the DataReceived event is triggered whenever a certain number of bytes is available for reading from the buffer. In that scenario, if I never read those 50 bytes, will the event fire continuously due to the presence of these unread bytes?

    Read the article

  • Failed to convert parameter value from a Guid to a String

    - by user320460
    Hello, I am at the end of my knowledge and have googled for the answer too, but no luck :/ A week ago everything worked well. I did a revert on the repository, recreated the TableAdapter, etc., but nothing helped. When I try to save in my application I get a System.InvalidCastException at this point in PersonListDataSet.cs:

        partial class P_GroupTableAdapter
        {
            public int Update(PersonListDataSet.P_GroupDataTable dataTable, string userId)
            {
                this.Adapter.InsertCommand.Parameters["@userId"].Value = userId;
                this.Adapter.DeleteCommand.Parameters["@userId"].Value = userId;
                this.Adapter.UpdateCommand.Parameters["@userId"].Value = userId;
                return this.Update(dataTable); // <-- Exception occurs here
            }
        }

    Everything is stuck here because a Guid (and I checked the DataTable preview with the magnifier tool; it really is a true Guid in the DataTable column) cannot be converted to a string. How can that happen?

    Read the article

  • In WPF using a ListView is there an elegant way to auto resize items?

    - by Justin
    I recently wrote my first WPF application, which has a list of items that are polled from a web service. The items are displayed/data-bound in a ListView via a GridView. A background thread periodically polls the web service and updates the list. Say I had three items initially bound to the ListView that simply display a description, and the three descriptions were something like:

        - ProjectA
        - ProjectB
        - ProjectC

    If later a new item is added with a description of 'AReallyReallyLongProjectName', I would end up with a list like:

        - ProjectA
        - ProjectB
        - ProjectC
        - AReallyR

    The GridViewColumn would not update its width and would subsequently cut off any new items that extended past the original width. I added this bit of code, which forces the column to resize, but it just seems a little hacky. (It seems weird to set a width just to set it back to nothing to force the resize.)

        if (gridView != null)
        {
            foreach (var column in gridView.Columns)
            {
                if (double.IsNaN(column.Width))
                    column.Width = column.ActualWidth;
                column.Width = double.NaN;
            }
        }

    Is there a better, more elegant solution to accomplish this same thing?

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15 TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100 GB in size, the workload size for that location file is 500 GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do, and the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
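    To make Rules 1 and 3 concrete, here is a small Python sketch (not part of OSCH or its tooling; it just replays the arithmetic of the worked example above) that derives a maximum DOP from the hardware description and a location-file count from the DOP you are allowed to use:

        def max_dop(instances, cpus_per_instance, cores_per_cpu):
            # maximum DOP is roughly instances * CPUs per instance * cores per CPU
            return instances * cpus_per_instance * cores_per_cpu

        def location_file_count(allowed_dop, multiple=2):
            # Rule 3: pick a small multiple of the DOP so every PQ slave stays busy
            return allowed_dop * multiple

        print(max_dop(2, 2, 32))        # 128, the RAC example above
        print(location_file_count(8))   # 16 location files for an allowed DOP of 8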
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on. A toy sketch of this greedy assignment appears at the end of this post.)

    The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or there are too few of them. For example, suppose my DOP is 8 and the number of location files is 8, and suppose I have only 8 HDFS files, where one file is 900 GB and the others are 100 GB. When the tool tries to balance the load it will be forced to put the singleton 900 GB file into one location file and put each of the 100 GB files into the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 to our DOP of 8, we could divide the workload into 160 files that are approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200 GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads.

    How small should HDFS files get? Not so small that the HDFS file open and close overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8 GB. I then did the same load with 12800 files, where each HDFS file was about 80 MB. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8 MB per file), it started to make an impact and slow down the load time.

    What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
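    As promised above, here is a toy Python sketch of the greedy assignment described under Rule 4. It is not the actual OSCH External Table tool, just an illustration of the balancing idea: hand the biggest remaining HDFS file to the location file with the lightest workload so far.

        import heapq

        def assign_to_location_files(hdfs_file_sizes_gb, num_location_files):
            # heap of (total_gb_assigned, location_file_index, assigned_file_sizes)
            heap = [(0, i, []) for i in range(num_location_files)]
            heapq.heapify(heap)
            for size in sorted(hdfs_file_sizes_gb, reverse=True):
                total, idx, files = heapq.heappop(heap)   # lightest location file so far
                files.append(size)
                heapq.heappush(heap, (total + size, idx, files))
            return sorted(heap, key=lambda entry: entry[1])

        # the worked example above: 160 HDFS files of ~10 GB over 8 location files
        for total, idx, files in assign_to_location_files([10] * 160, 8):
            print("location file %d: %d files, %d GB" % (idx, len(files), total))

    Run against the skewed example (one 900 GB file plus seven 100 GB files over 8 location files), the same sketch reproduces the 9-to-1 imbalance described above.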

    Read the article

  • NSString stringWithContentsOfFile failing with what seems to be the wrong error code

    - by deanWombourne
    Hello. I'm trying to load a file into a string. Here is the code I'm using:

        NSError *error = nil;
        NSString *fullPath = [[NSBundle mainBundle] pathForResource:filename ofType:@"html"];
        NSString *text = [NSString stringWithContentsOfFile:fullPath encoding:NSUTF8StringEncoding error:&error];

    When passed @"about" as the filename, it works absolutely fine, showing the code works. When passed @"eula" as the filename, it fails with 'Cocoa error 258', which translates to NSFileReadInvalidFileNameError. However, if I swap the contents of the files over but keep the names the same, the other file fails, proving there is nothing wrong with the filename; it's something to do with the content. The about file is fairly simple HTML, but the eula file is a massive mess exported from Word by the legal department. Does anyone know of anything inside an HTML file that could cause this error to be raised? Many thanks, Sam

    Read the article
