Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific.

    I want to write a program which processes files. The processing is nontrivial, so the best approach is to split the different phases into standalone modules which are then used as necessary (sometimes I will only be interested in the output of module A, sometimes I will need the output of five other modules, etc.). The thing is, the modules need to cooperate, because the output of one might be the input of another. And it needs to be FAST. Moreover, I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files.

    The task of the main program would be quite simple: just parse arguments and run the required modules (and perhaps produce some output, or should that be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability, maintainability, and to be able to have more people working on different modules without needing some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation; the original file is not changed.

    Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, PDF books...).
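
    One minimal sketch of the kind of module layout described above (all names here, Module, Context, blobs, are hypothetical and not from the question): each module reads and writes named intermediate results in a shared context, so module A's output is computed once and reused by B and C.

        // Sketch only: modules exchange binary blocks and offsets through a shared context.
        #include <map>
        #include <string>
        #include <vector>

        struct Context {
            std::string inputPath;                                    // file being processed (read-only)
            std::map<std::string, std::vector<unsigned char>> blobs;  // shared binary blocks / offsets
        };

        class Module {
        public:
            virtual ~Module() {}
            virtual const char* name() const = 0;
            virtual void run(Context& ctx) = 0;  // consume and produce entries in ctx.blobs
        };

    The main program would then build the list of requested modules from the command line and call run() on each in dependency order, passing the same Context, so nothing is recomputed.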

    Read the article

  • How best to deal with warning c4305 when type could change?

    - by identitycrisisuk
    I'm using both Ogre and NxOgre, which both have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being:

        warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real'

    when initialising variables with 0.1, for example. Normally I would use 0.1f, but then if you change the compiler flag to double precision you would get the reverse warning. I guess it's probably best to pick one and stick with it, but I'd like to write these in a way that would work for either configuration if possible.

    One fix would be to use #pragma warning (disable : 4305) in the files where it occurs; I don't know if there are any other more complex problems that can be hidden by not having this warning. I understand I would push and pop these in header files too so that they don't end up spreading across code.

    Another is to create some macro based on the precision compiler flag like:

        #if OGRE_DOUBLE_PRECISION
        #define INIT_REAL(x) (x)
        #else
        #define INIT_REAL(x) static_cast<float>( x )
        #endif

    which would require changing all the variable initialisation done so far, but at least it would be future proof. Any preferences, or something I haven't thought of?
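
    A hedged third option (a sketch only, assuming Ogre::Real is exactly the float/double typedef described above, and initReal is a made-up name): a tiny inline helper whose explicit cast compiles cleanly under either precision setting, without a macro.

        // The explicit static_cast suppresses C4305 when Ogre::Real is float,
        // and is a no-op conversion when it is double.
        inline Ogre::Real initReal(double v) { return static_cast<Ogre::Real>(v); }

        // usage
        Ogre::Real speed = initReal(0.1);   // no truncation warning in either build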

    Read the article

  • help me reason about F# threads

    - by Kevin Cantu
    In goofing around with some F# (via MonoDevelop), I have written a routine which lists files in a directory with one thread:

        let rec loop (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( path |> Directory.GetDirectories |> Array.map loop |> Array.concat )

    And then an asynchronous version of it:

        let rec loopPar (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( let paths = path |> Directory.GetDirectories
                  if paths <> [||] then
                      [| for p in paths -> async { return (loopPar p) } |]
                      |> Async.Parallel
                      |> Async.RunSynchronously
                      |> Array.concat
                  else
                      [||] )

    On small directories, the asynchronous version works fine. On bigger directories (e.g. many thousands of directories and files), the asynchronous version seems to hang. What am I missing? I know that creating thousands of threads is never going to be the most efficient solution -- I only have 8 CPUs -- but I am baffled that for larger directories the asynchronous function just doesn't respond (even after a half hour). It doesn't visibly fail, though, which baffles me. Is there a thread pool which is exhausted? How do these threads actually work?

    Read the article

  • php automatically commented with apache

    - by clement
    We have installed Apache 2.2 and ActivePerl to run Bugzilla, all of this on Windows Server 2003. We now want to install PHP on the server to set up a wiki. I followed the steps in a tutorial to install PHP and enable it from Apache. After all those steps, and a couple of restarts, when I try a simple phpinfo() page the whole PHP code comes back commented out:

        <!-- ?php phpinfo(); ? --

    Now, the httpd.conf was already edited for Perl, and those edits may be the cause of the problem. Here is the whole httpd.conf file:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 6969
        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule php5_module "c:/php/php5apache2_2.dll"
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so
        User daemon
        Group daemon
        ServerAdmin [email protected]
        DocumentRoot C:/bugzilla-4.4.2/
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Options Indexes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
        ScriptInterpreterSource Registry-Strict
        DirectoryIndex index.html index.html.var index.cgi index.php
        Order allow,deny
        Deny from all
        Satisfy All
        ErrorLog "logs/error.log"
        LogLevel warn
        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
        DefaultType text/plain
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddHandler cgi-script .cgi
        AddType application/x-httpd-php .php
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
        PHPIniDir "c:/php"

    Read the article

  • Outlook Interop: Password protected PST file headache

    - by Ed Manet
    Okay, I have no problem identifying the .PST file using the Outlook Interop assemblies in a C# app. But as soon as I hit a password protected file, I am prompted for a password. We are in the process of disabling the use of PSTs in our organization and one of the steps is to unload the PST files from the users' Outlook profile. I need to have this app run silently and not prompt the user. Any ideas? Is there a way to create the Outlook.Application object with no UI and then just try to catch an Exception on password protected files?

        // create the app and namespace
        Application olApp = new Application();
        NameSpace olMAPI = olApp.GetNamespace("MAPI");

        // get the storeID of the default inbox
        string rootStoreID = olMAPI.GetDefaultFolder(OlDefaultFolders.olFolderInbox).StoreID;

        // loop thru each of the folders
        foreach (MAPIFolder fo in olMAPI.Folders)
        {
            // compare the first 75 chars of the storeid
            // to prevent removing the Inbox folder.
            string s1 = rootStoreID.Substring(1, 75);
            string s2 = fo.StoreID.Substring(1, 75);
            if (s1 != s2)
            {
                // unload the folder
                olMAPI.RemoveStore(fo);
            }
        }
        olApp.Quit();

    Read the article

  • Issue reading in a cell from Excel with Apache POI

    - by Nick
    I am trying to use Apache POI to read in old (pre-2007, XLS) Excel files. My program goes to the end of the rows and iterates back up until it finds something that's not either null or empty. Then it iterates back up a few times and grabs those cells. This program works just fine reading in XLSX and XLS files made in Office 2010. With the older files, however, I get the following error message:

        Exception in thread "main" java.lang.NumberFormatException: empty String
            at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
            at java.lang.Double.parseDouble(Unknown Source)

    at the line:

        num = Double.parseDouble(str);

    from the code:

        str = cell.toString();
        if (str != "" || str != null) {
            System.out.println("Cell is a string");
            num = Double.parseDouble(str);
        } else {
            System.out.println("Cell is numeric.");
            num = cell.getNumericCellValue();
        }

    where the cell is the last cell in the document that's not empty or null. When I try to print the first cell that's not empty or null, it prints nothing, so I think I'm not accessing it correctly.

    Read the article

  • Chrome targeted CSS

    - by Chris
    I have some CSS code that hides the cursor on a web page (it is a client-facing static screen with no interaction). The code I use to do this is below:

        *, html {
            cursor: url('/web/resources/graphics/blank.cur'), pointer;
        }

    Blank.cur is a totally blank cursor file. This code works perfectly well in all browsers when I host the web files on my local server, but when I upload to a Windows CE webserver (our production unit) the cursor represents itself as a black box. Odd. After some testing it seems that Chrome only has a problem with totally blank cursor files when served from the WinCE web server, so I created a blank cursor with one pixel as white, specifically for Chrome. How do I then target this CSS rule to Chrome specifically? i.e.

        *, html {
            cursor: url('/web/resources/graphics/blank.cur'), pointer;
        }

        <!--[if CHROME]>
        *, html {
            cursor: url('/web/resources/graphics/blankChrome.cur'), pointer;
        }
        <![endif]-->

    Read the article

  • Starting Beyond Compare from the Command Line

    - by Logan
    I have Beyond Compare 3 installed at "C:\Program Files\Beyond Compare 3\BCompare.exe" and Cygwin at "C:\Cygwin\bin\bash.exe". What I would like is to be able to type a command such as:

        diff <file1> <file2>

    into the Cygwin shell and have the shell fork a process opening the two files in Beyond Compare. I looked at the Beyond Compare support page, but I'm afraid it was too brief for me. I tried copying the text verbatim (apart from the path to the executable) to no avail:

    Instead of using a batch file, create a file named "bc.sh" with the following line:

        "$(cygpath 'C:\Progra~1\Beyond~1\bcomp.exe')" `cygpath -w "$6"` `cygpath -w "$7"` /title1="$3" /title2="$5" /readonly

    Was I supposed to replace cygpath? I get a 'Command not found' error when I enter the name of the script on the command line.

        gavina@whwgavina1 /cygdrive
        $ "C:\Documents and Settings\gavina\Desktop\bc.sh"
        bash: C:\Documents and Settings\gavina\Desktop\bc.sh: command not found

    Does anyone have Beyond Compare working as I have described? Is this even possible in a Windows environment? Thanks in advance!

    Read the article

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a problem with how the PHP engine handles identical files that differ only in their file extension. Problem: "An If-Modified-Since conditional request returned the full content unchanged." Also, I measured that the .php extension loads much faster than its identical twin with the .xxx extension, even though the file contents are identical and they differ only in their file extension. "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation."

    homepage ending with .php
    exact same file, but ending .ast

    Given: a home.php file is copied as home.xxx and this extension is added to htaccess to recognize it as a PHP file. The .php file listens to php.ini, where freshness is set to 3 hrs; the non-.php files have to listen to htaccess, where freshness is set to 2 hrs, according to:

        AddType application/x-httpd-php .php .ast .abc .xxx .etc

        <IfModule mod_headers.c>
            ExpiresActive On
            ExpiresDefault M2419200
            Header unset ETag
            FileETag None
            Header unset Pragma
            Header set Cache-Control "max-age=2419200"

            ##### DYNAMIC PAGES
            <FilesMatch "\\.(ast|php|abc|xxx)$">
                ExpiresDefault M7200
                Header set Cache-Control "public, max-age=7200"
            </FilesMatch>
        </IfModule>

    So far so good and everything loads, except the non-php file doesn't cache properly, or rather, to be more specific, it does cache but doesn't validate well. See the images enclosed. Only the non-php file extension causes the error and loads slower. The entire page.php loads faster, as somehow all the elements in there then load properly from cache, while page.abc has the full request returned when it ought to be cached, meaning the entire page is slower. Bottom line: what should be changed in order to eliminate the If-Modified-Since conditional request returning the full content unchanged?

    Read the article

  • Projects integration question

    - by qkrsppopcmpt
    The other team has a legacy system which is a data aggregator. It is implemented as a web service using Java, SOAP, MTOM, Tomcat and Axis2. They have WSDL files defining functionality such as search, retrieve data, upload and download. Our team has a newly developed website built with RoR and MySQL. It is a sort of social network: users can register, add friends, upload images and videos, and also search data. We are required to connect the two systems. Possible solutions I can think of are:

    - Adding components to our website which invoke services on the aggregator.
    - Synchronizing the website database to the aggregator.

    My doubts are:

    1. How do we add such components to our website? Should the components use Java or Ruby, or an adapter from Java to Ruby? It should be possible for Ruby to invoke a web service; I think it should work, since that is the point of web services. If so, can Ruby call the services in the WSDL directly? And how do we deal with the different data structures?
    2. How do we synchronize our database to the aggregator? I think the best way is also through web service invocation, such as upload. That means we have to export the DB records into XML files and then write some tools to upload them. The web service project supports MTOM, so it is fine to upload huge data.

    Am I on the right track? Can anybody give me some hints? Thanks.

    Read the article

  • Hosting images from unsecured servers (travelnow.com)

    - by i.am.not.aids
    Hi, my application needs to serve images hosted on travelnow.com (i.e. this image), but the application only allows images hosted on a secure server (i.e. HTTPS). What are my options? TravelNow's suggestion is as follows; how do I do this? "Akamai image servers are not secure. Therefore you are unable to serve any of the image urls with a secure HTTPS URL. If you need to serve an image with HTTPS, you must temporarily save the image to your own secure server. This is suggested only for images to be saved as you use them or need them temporarily on the secure page. The hotel images file available from the Affiliate Center provides up to 1.5 million URLs at any time for all properties storing images in the Akamai system. It is not recommended or advised to store all files in advance on your own system since properties change and update images frequently. Although we are not responsible for the images each property stores on the Akamai system, YOU will be responsible for any customer issues arising from displaying outdated or saved image files on your own pages." Thanks! Adrian

    Read the article

  • Trusted Folder/Drive Picker in the Browser

    - by kylepfritz
    I'd like to write a Folder/Drive picker that runs in the browser and allows a user to select files to upload to a webservice. The primary usage would be selecting folders or a whole CD and uploading them to the web with their directory structure intact. I'm imagining something akin to Jumploader but which automatically enumerates external drives and CDs. I remember a version of Facebook's picture uploader that could do this sort of enumeration and was Java-based, but it has since been replaced by a much slicker plugin-based architecture. Because the application needs to run at very high trust, I think I'm limited to old-school Java applets. Is there another alternative? I'm hesitant to start down the plugin route because of the necessity of writing one for both IE and Mozilla at a minimum. Are there good places to get started there? On the applet front, I built a clunky prototype to demonstrate that I can enumerate devices and list files. It runs fine in the applet viewer, but I don't think I have the security settings configured correctly for it to run in the browser at full trust. Currently I don't get any drives back when I run it in the browser.

    Applet Prototype:

        public class Loader extends javax.swing.JApplet {
            ...
            private void EnumerateDrives(java.awt.event.ActionEvent evt) {
                File[] roots = File.listRoots();
                StringBuilder b = new StringBuilder();
                for (File root : roots) {
                    b.append(root.getAbsolutePath() + ", ");
                }
                jLabel.setText(b.toString());
            }
        }

    Embed Html:

        <p>Loader:</p>
        <script src="http://www.java.com/js/deployJava.js" type="text/javascript"></script>
        <script>
            var attributes = {code:'org.exampl.Loader.Loader.class', archive:'Loader/dist/Loader.jar', width:600, height:400};
            var parameters = {};
            deployJava.runApplet(attributes, parameters, '1.6');

    Read the article

  • Document management, SCM ?

    - by tsunade
    Hello, this might not be a hard-core programming question, but it's related to some of the tools used by programmers, I suspect. We're a bunch of people, each with a bunch of documents, and a bunch of different computers on a bunch of operating systems (well, only 2: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind, and I've tried Subversion; it seems to be a good thing that it uses a centralized repository, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think) and while it works, it becomes horribly slow for the big directories we have here since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a NO, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping Projects-Folders-Files with a tree view on the left and a details view on the right; you then click on each item to look at the revision history for that item. Assuming that I have all the historical versioning information available for a project from an object-oriented model perspective (e.g. classes, methods, parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, as you currently use SVN, SS, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming that this is a greenfield project and not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks again to anyone who shares their thoughts.

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database, and the query to pull out all non-employee-related wav file records is as follows:

        var results = from Wavs in db.WaveFiles
                      where Wavs.employeeid == null;

    Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned is that LinqToSql is turning it into SQL that looks very much like:

        SELECT Field1, Field2 //etc
        FROM WaveFiles
        WHERE 1=0

    Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into:

        SELECT Field1, Field2 //etc
        FROM WaveFiles
        WHERE EmployeeID IS NULL

    i.e. if there is an association then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • C++ Header file questions

    - by Karl
    So I'm trying to learn C++ and I've gotten as far as using header files. They really make no sense to me. I've tried many combinations of this but nothing so far has worked:

    Main.cpp:

        #include "test.h"

        int main()
        {
            testClass Player1;
            return 0;
        }

    test.h:

        #ifndef TEST_H_INCLUDED
        #define TEST_H_INCLUDED

        class testClass
        {
        private:
            int health;
        public:
            testClass();
            ~testClass();
            int getHealth();
            void setHealth(int inH);
        };

        #endif // TEST_H_INCLUDED

    test.cpp:

        #include "test.h"

        testClass::testClass()
        {
            health = 100;
        }

        testClass::~testClass() {}

        int testClass::getHealth()
        {
            return(health);
        }

        void testClass::setHealth(int inH)
        {
            health = inH;
        }

    What I'm trying to do is pretty simple, but the way the header files work just makes no sense to me at all. Code::Blocks returns the following on build:

        obj\Debug\main.o(.text+0x131)||In function `main':|
        voip\test\main.cpp|6|undefined reference to `testClass::testClass()'|
        obj\Debug\main.o(.text+0x13c):voip\test\main.cpp|7|undefined reference to `testClass::~testClass()'|
        ||=== Build finished: 2 errors, 0 warnings ===|

    I'd appreciate any help. Or if you have a decent tutorial for it, that would be fine too (most of the tutorials I've googled haven't helped).

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using the Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. is my understanding of AWS correct? and 2. is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS though...

    Read the article

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 images (JPEG) of 640x480. I intend to make 500 pixels (the 1st pixel of each image) into a list and then send that via COM1 to an FPGA where I do my further processing. I have a couple of questions here:

    1. How do I import all 500 images at a time into Python, and how do I store them?
    2. How do I send the 500-pixel list via COM1 to the FPGA?

    I tried the following:

    1. Converted the JPEG image to intensity values (each pixel is denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity value files for all 500 images!
    2. Used NumPy to put the read files in a matrix and then pick the first pixel of all images. But when I send it, it comes out like: [56, 61, 78, ... ,71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially?

    Thanks in advance! :)

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time) and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this though if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • How do I get Spotlight attributes to display in the get info window?

    - by Alexander Rauchfuss
    I have created a Spotlight importer for comic files. The attributes are successfully imported and searchable. The one thing that remains is getting the attributes to display in a file's get info window. It seems that this should be a simple matter of editing the schema.xml file so the attributes are nested inside displayattrs tags. Unfortunately this does not seem to be working. I simplified the plugin for testing. The following are all of the important files.

    schema.xml

        <types>
            <type name="cx.c3.cbz-archive">
                <allattrs>
                    kMDItemTitle kMDItemAuthors
                </allattrs>
                <displayattrs>
                    kMDItemTitle kMDItemAuthors
                </displayattrs>
            </type>
            <type name="cx.c3.cbr-archive">
                <allattrs>
                    kMDItemTitle kMDItemAuthors
                </allattrs>
                <displayattrs>
                    kMDItemTitle kMDItemAuthors
                </displayattrs>
            </type>

    GetMetadataForFile.m

        Boolean GetMetadataForFile(void* thisInterface, CFMutableDictionaryRef attributes, CFStringRef contentTypeUTI, CFStringRef pathToFile)
        {
            NSAutoreleasePool * pool = [NSAutoreleasePool new];
            NSString * file = (NSString *)pathToFile;

            NSArray * authors = [[UKXattrMetadataStore stringForKey: @"com_opencomics_authors"
                                                             atPath: file
                                                       traverseLink: NO] componentsSeparatedByString: @","];
            [(NSMutableDictionary *)attributes setObject: authors forKey: (id)kMDItemAuthors];

            NSString * title = [UKXattrMetadataStore stringForKey: @"com_opencomics_title"
                                                           atPath: file
                                                     traverseLink: NO];
            [(NSMutableDictionary *)attributes setObject: title forKey: (id)kMDItemTitle];

            [pool release];
            return true;
        }

    Read the article

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings, We use DynamicPDF to merge multiple PDF documents stored in a MSSQL database. The vast majority of times it works wonderfully, but occasionally one of these documents will fail to merge, generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are greater than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is if there is a way to resize the PDF files to the correct size so this is not a concern at all...

        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
            MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);

            ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
            ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc);
            docCombinedPDF.Append(mergedoc);
        }

    Thanks....

    Read the article

  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;

        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Tasks interface implementations in it
        // have user able to choose a Task and edit its specific Q_PROPERTY's
        // run the Task

    It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug-And-Paint example), which is terrible boilerplate that I would like to avoid. Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. being able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
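
    A hedged sketch of one way to keep a single interface per .dll while still bundling many tasks: a factory that returns the Task instances directly, so no string keys are needed. The names TaskFactory, moduleName, createTasks and the IID string are hypothetical, not part of Qt or the question.

        // Sketch only. Each plugin exports one TaskFactory; the Q_PROPERTYs declared
        // on each returned Task remain reachable through metaObject(), so a
        // QtPropertyBrowser can still edit them generically.
        #include <QtPlugin>
        #include <QList>
        #include <QString>
        #include <QObject>

        class TaskFactory
        {
        public:
            virtual ~TaskFactory() {}
            virtual QString moduleName() const = 0;                       // e.g. "FileTasks"
            virtual QList<Task*> createTasks(QObject *parent) const = 0;  // Task is the base class above
        };

        Q_DECLARE_INTERFACE(TaskFactory, "com.example.TaskFactory/1.0")   // hypothetical IID

    A FileTasks plugin would then implement TaskFactory once (QObject subclass with Q_INTERFACES(TaskFactory)) and return its DeleteFileTask, CreateFileTask, etc. from createTasks(); the host loads the plugin with QPluginLoader and qobject_cast to TaskFactory*.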

    Read the article

  • SQL Latest photos from contacts (grouped by contact)

    - by kitsched
    Hello, the short version of this question is that I want to accomplish something along the lines of what's visible on Flickr's homepage once you're logged in: it shows the three latest photos of each of your friends, sorted by date but grouped by friend. Here's a longer explanation. For example, I have 3 friends: John, George and Andrea. The list I want to extract should look like this:

        George
            Photo - 2010-05-18
            Photo - 2010-05-18
            Photo - 2010-05-12
        John
            Photo - 2010-05-17
            Photo - 2010-05-14
            Photo - 2010-05-12
        Andrea
            Photo - 2010-05-15
            Photo - 2010-05-15
            Photo - 2010-05-15

    The friend with the most recent photo uploaded is on top, but his or her next 2 photos follow. I'd like to do this in MySQL, and for the time being I got here:

        SELECT photos.user_id, photos.id, photos.date_uploaded
        FROM photos
        WHERE photos.user_id IN (SELECT user2_id FROM user_relations WHERE user1_id = 8)
        ORDER BY date_uploaded DESC

    where user1_id = 8 is the currently logged-in user and user2_id are the IDs of friends. This query indeed returns the latest files uploaded by the contacts of the user with id = 8, sorted by date. However, I'd like to accomplish the grouping and limiting mentioned above. Hopefully this makes sense. Thank you in advance.

    Read the article

  • Database.ExecuteNonQuery does not return

    - by dan-waterbly
    I have a very odd issue. When I execute a specific database stored procedure from C# using SqlCommand.ExecuteNonQuery, my stored procedure is never executed. Furthermore, SQL Profiler does not register the command at all. I do not receive a command timeout, and no exception is thrown. The weirdest thing is that this code has worked fine over 1,200,000 times, but for this one particular file I am inserting into the database, it just hangs forever. When I kill the application, I receive this error in the event log of the database server: "A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)." This makes me think that the database server is receiving the command, though SQL Profiler says otherwise. I know that the appropriate permissions are set, and that the connection string is right, as this piece of code and stored procedure work fine with other files. Below is the code that calls the stored procedure. It may be important to note that the file I am trying to insert is 33.5MB, but I have added more than 10,000 files larger than 500MB, so I do not think the size is the issue:

        using (SqlConnection sqlconn = new SqlConnection(ConfigurationManager.ConnectionStrings["TheDatabase"].ConnectionString))
        using (SqlCommand command = sqlconn.CreateCommand())
        {
            command.CommandText = "Add_File";
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 30; // should time out in 30 seconds, but doesn't...

            command.Parameters.AddWithValue("@ID", ID).SqlDbType = SqlDbType.BigInt;
            command.Parameters.AddWithValue("@BinaryData", byteArr).SqlDbType = SqlDbType.VarBinary;
            command.Parameters.AddWithValue("@FileName", fileName).SqlDbType = SqlDbType.VarChar;

            sqlconn.Open();
            command.ExecuteNonQuery();
        }

    There is no firewall between the server making the call and the database server, and the Windows firewalls have been disabled to troubleshoot this issue.

    Read the article

  • Java conditional compilation: how to prevent code chunks to be compiled?

    - by khachik
    My project requires Java 1.6 for compilation and running. Now I have a requirement to make it work with Java 1.5 (from the marketing side). I want to replace a method body (the return type and arguments remain the same) to make it compile with Java 1.5 without errors.

    Details: I have a utility class called OS which encapsulates all OS-specific things. It has a method

        public static void openFile(java.io.File file) throws java.io.IOException {
            // open the file using java.awt.Desktop
            ...
        }

    to open files as with a double-click (the equivalent of the Windows start command or the Mac OS X open command). Since it cannot be compiled with Java 1.5, I want to exclude it during compilation and replace it with another method which calls run32dll for Windows or open for Mac OS X using Runtime.exec.

    Question: How can I do that? Can annotations help here?

    Note: I use ant, and I can make two java files OS4J5.java and OS4J6.java which contain the OS class with the desired code for Java 1.5 and 1.6 and copy one of them to OS.java before compiling (or, an ugly way, replace the content of OS.java conditionally depending on the Java version), but I don't want to do that if there is another way. Elaborating more: in C I could use #ifdef/#ifndef, in Python there is no compilation and I could check a feature using hasattr or something else, in Common Lisp I could use #+feature. Is there something similar for Java? I found this post but it doesn't seem to be helpful. Any help is greatly appreciated. kh.

    Read the article
