Search Results

Search found 10026 results on 402 pages for 'word documents'.


  • Is it possible to execute a function in Mongo that accepts any parameters?

    - by joshua.clayton
    I'm looking to write a function to do a custom query on a collection in Mongo. The problem is, I want to reuse that function. My thought was this (obviously contrived):

        var awesome = function(count) {
            return function() {
                return this.size == parseInt(count);
            };
        }

    So then I could do something along the lines of:

        db.collection.find(awesome(5));

    However, I get this error:

        error: { "$err" : "error on invocation of $where function:
        JS Error: ReferenceError: count is not defined nofile_b:1" }

    So it looks like Mongo isn't honoring scope, but I'm really not sure why. Any insight would be appreciated. To go into more depth about what I'd like to do: a collection of documents has lat/lng values, and I want to find all documents within a concave or convex polygon. I have the function written, but would ideally be able to reuse it, so I want to pass an array of points composing my polygon to the function I execute on Mongo's end. I've looked at Mongo's geospatial querying and it currently only supports circle and box queries; I need something more complex.
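    One common reading of that error is that the $where function is serialized and sent to the server without its closure, so count never arrives there. A minimal sketch of one workaround, assuming the goal is just to get the literal value server-side, is to build the $where expression as a string; it is shown here with pymongo (the connection details are assumptions), and the same string-building idea applies verbatim in the mongo shell.

        # Hypothetical sketch: interpolate the value into the $where string,
        # since closures over client-side variables are not shipped to the server.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")  # assumed connection string
        db = client["test"]                                # assumed database name

        def awesome(count):
            # The literal value travels inside the query itself.
            return {"$where": "this.size == %d" % int(count)}

        for doc in db.collection.find(awesome(5)):
            print(doc)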

    Read the article

  • SharePoint Item Event Receivers and Site Creation

    - by Michael Edwards
    I have created an Item Event Receiver for a document library, and I have tested that the logic works correctly. The next thing I wanted to do is automatically create the list when a site is created, so I added the list to the ONET.xml file for the site:

        <Lists>
            <List Title="Documents" Description="Documents " url="MyDocumentLibrary" Type="10002"
                  FeatureId="CFD8504D-70EB-4ba2-9CCB-52E38DB39E60" QuickLaunchUrl="Docs/AllItems.aspx" />
        </Lists>

    And I ensure that the feature for this list is also activated by adding the feature to the <WebFeatures> element:

        <WebFeatures>
            <Feature ID="CFD8504D-70EB-4ba2-9CCB-52E38DB39E60" />
        </WebFeatures>

    The problem occurs after I create the site: when I add a document to the list, the Item Event Receiver does not run. However, if I manually go to the web site features and deactivate and then reactivate the feature, the Item Event Receiver does run. It seems that when the list is created through ONET.xml and the feature is activated, the Item Event Receiver is not bound to the list. What is the workaround for this? Is this a bug?

    Read the article

  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file referring to the document should be present, and something in the document should be present. Just two simple checks for the example, but think of 8 more checks before I have successfully processed my document. Which would have your preference?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or:

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there an even better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference is currently the exception variant. What is yours?

    Read the article

  • How do I ensure only the headers are shown for the first item in an ItemsControl in WPF?

    - by Dan Ryan
    I am using MVVM, binding an ObservableCollection of children to an ItemsControl. The ItemsControl contains a UserControl used to style the UI for the children.

        <ItemsControl ItemsSource="{Binding Documents}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <View:DocumentView Margin="0, 10, 0, 0" />
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    I want to show a header row for the contents of the ItemsControl but only want to show this once at the top (not for every child). How can I implement this behaviour in the DocumentView user control? Fyi, I am using a Grid layout to style the child rows:

        <Grid>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="34"/>
                <ColumnDefinition Width="100"/>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="60" />
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition />
                <RowDefinition />
            </Grid.RowDefinitions>
            <TextBlock Grid.ColumnSpan="4" Grid.Row="0" Text="Should only show this at the top"></TextBlock>
            <Image Grid.Column="0" Grid.Row="1" Height="24" Width="24"
                   Source="/Beazley.Documents.Presentation;component/Icons/error.png"></Image>
            <ComboBox Grid.Column="1" Grid.Row="1" Name="ContentTypes"
                      ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.ContentTypes}"
                      SelectedValue="{Binding ContentType}"/>
            <TextBox Grid.Column="2" Grid.Row="1" Text="{Binding Path=FileName}"/>
            <Button Grid.Column="3" Grid.Row="1"
                    Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.RemoveFile}"
                    CommandParameter="{Binding}">Remove</Button>
        </Grid>

    Read the article

  • NSString writeToFile operation couldn't be completed

    - by Chonch
    Hey, I have an XML file in my application bundle. I want to copy it to the Documents folder at installation and then, every time the app launches, get the newest version of this file from the Internet. I use this code:

        // Check if the file exists in the documents folder
        NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        if (![[NSFileManager defaultManager] fileExistsAtPath:[documentsPath stringByAppendingPathComponent:@"fileName.xml"]])
            // If not, copy it there (from the bundle)
            [[NSFileManager defaultManager] copyItemAtPath:[[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"OriginalFile.xml"]
                                                    toPath:[documentsPath stringByAppendingPathComponent:@"fileName.xml"]
                                                     error:nil];

        // Get the newest version of the file from the server
        NSURL *url = [[NSURL alloc] initWithString:@"http://www.sitename.com/webservice.asmx/webserviceName"];
        NSString *results = [[NSString alloc] initWithContentsOfURL:url];

        // Replace the current version with the newest one, only if it is valid
        if (results != nil)
            [results writeToFile:[documentsPath stringByAppendingPathComponent:@"fileName.xml"]
                      atomically:NO
                        encoding:NSStringEncodingConversionAllowLossy
                           error:nil];

    The problem is that the writeToFile call always returns NO, and the file's contents remain identical to the original file I included in my app bundle. I checked the value of results and it's correct. I also made sure that the app does reach the writeToFile call, but still, it always returns NO. Can anybody tell me what I'm doing wrong? Thanks,

    Read the article

  • How to handle authenticated user access to resources in document oriented system?

    - by Jeremy Raymond
    I'm developing a document-oriented application and need to manage user access to the documents. I have a module that handles user authentication, and another module that handles document CRUD operations on the data store. Once a user is authenticated, I need to enforce what operations the user can and cannot perform on documents based upon the user's permissions. The best option I could think of to integrate these two pieces together would be to create another module that duplicates the data API but that also takes the authenticated user as a parameter. The module would delegate the authorization check to the auth module and delegate the document operation to the data access module. Something like:

        -module(auth_data_access).

        % User is authenticated (logged into the system)
        % save_doc validates if user is allowed to save the given document and if so
        % saves it returning ok, else returns {error, permission_denied}
        save_doc(Doc, User) ->
            case auth:save_allowed(Doc, User) of
                ok -> data_access:save_doc(Doc);
                denied -> {error, permission_denied}
            end.

    Is there a better way I can handle this?

    Read the article

  • Does WordPress clear $GLOBALS ?

    - by Brayn
    Hey, what I want to do is include one of my PHP scripts in a WordPress theme. The problem is that after I include the script file, I can't access variables declared in the script file from inside functions in the theme file. I have created a new file in the theme folder and added the same code as in header.php, and if I open that file directly it works just fine. So as far as I can tell it's something WordPress-related.

        /other/path/wordpress/wp-content/themes/theme-name/header.php   // this is broken
        /other/path/wordpress/wp-content/themes/theme-name/test.php     // this works

        /var/www/vhosts/domain/wordpress/ ->(symlink)-> /other/path/wordpress/
        /other/path/wordpress/wp-content/themes/theme-name/header.php
        /var/www/vhosts/domain/include_file.php

    Content of /var/www/vhosts/domain/include_file.php:

        $global_var = 'global';
        print_r($GLOBALS);
        // if I open this file directly this prints globals WITH $global_var;
        // if this file is included in header this prints all the WP stuff WITHOUT $global_var;

    Content of /other/path/wordpress/wp-content/themes/theme-name/header.php:

        require '/path/to/include_file.php';
        print $global_var;   // this prints 'global' as expected

        function test() {
            global $global_var;
            print $global_var;   // this is NULL
        }
        test();

        print_r($GLOBALS);   // this prints all the WP stuff WITHOUT $global_var in it

    Read the article

  • Prevent Text Input expanding in IE

    - by Caroline
    Hi, I am having a problem with an input field in IE. The code is for a portlet, and widths need to be dynamic because the user can place the portlet in any of the three columns on the page, which all have different widths. As always, it works fine in FF but not in IE. In order to make the width dynamic I have set width="100%". Data to populate the text input comes from a DB. When the page is rendered, if there is a long URL the text input expands to fill the contents in IE, but in FF it just stays the same width (i.e. 100% of the TD that it lives in). How can I stop IE from changing the width in order to fit the contents? Setting the width to a fixed 100px fixes the issue, but I need to have the width as a percentage in order to accommodate the layout of the portlet wherever it is put on the page. I have tried overflow:hidden and word-wrap:break-word but I can't get either to work. Here is my input code and style sheets:

        <td class="right" >
            <input type="text" class="validate[custom[url]]" value="" id="linkText"
                   name="communicationLink" maxlength="500" maxsize="100" />
        </td>

        .ofCommunicationsAdmin input {
            font-family: "Trebuchet MS";
            font-size: 11px;
            font-weight: normal;
            color: #333333;
            overflow: hidden;
        }

        .ofCommunicationsAdmin #linkText {
            overflow: hidden;
            width: 100%;
            border: 1px #cccccc solid;
            background: #F4F7ED top repeat-x;
        }

        .ofCommunicationsAdmin td.right {
            vertical-align: top;
            text-align: left;
        }

    Read the article

  • How to sort HashSet() function data in sequence?

    - by vincent low
    I am new to Java. The function I would like to perform is to load a series of records from a file into my HashSet. The problem is that I am able to enter all the data in sequence, but I can't retrieve it in sequence based on the account name in the file. Can anyone help? Below is my code:

        public Set retrieveHistory(){
            Set dataGroup = new HashSet();
            try{
                File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt");
                BufferedReader br = new BufferedReader(new FileReader(file));
                String data = br.readLine();
                while(data != null){
                    System.out.println("This is all the record:"+data);
                    Customer cust = new Customer();
                    //break the data based on the ,
                    String array[] = data.split(",");
                    cust.setCustomerName(array[0]);
                    cust.setpassword(array[1]);
                    cust.setlocation(array[2]);
                    cust.setday(array[3]);
                    cust.setmonth(array[4]);
                    cust.setyear(array[5]);
                    cust.setAmount(Double.parseDouble(array[6]));
                    cust.settransaction(Double.parseDouble(array[7]));
                    dataGroup.add(cust);
                    //then proceed to read next customer.
                    data = br.readLine();
                }
                br.close();
            }catch(Exception e){
                System.out.println("error" +e);
            }
            return dataGroup;
        }

        public static void main(String[] args) {
            FileReadDataModel fr = new FileReadDataModel();
            Set customerGroup = fr.retrieveHistory();
            for(Object obj : customerGroup){
                Customer cust = (Customer)obj;
                System.out.println("Cust name :" +cust.getCustomerName());
                System.out.println("Cust amount :" +cust.getAmount());
            }
        }

    Read the article

  • Visual Studio identical token highlighting

    - by dsteinweg
    I coded a Mancala game in Java for a college class this past spring, and I used the Eclipse IDE to write it. One of the great (and fairly simple) visual aids in Eclipse is that if you select a particular token, say a declared variable, the IDE will automatically highlight all other references to that token on your screen. Notepad++, my preferred Notepad replacement, also does this. Another neat and similar feature in Eclipse was the vertical "error bar" to the right of your code (not sure what to call it). It displays little red boxes for all of the syntax errors in your document, yellow boxes for warnings like "variable declared but not used", and if you select a word, boxes appear in the bar for each occurrence of the word in the document. (A screenshot of these features in action accompanied the original question.) After a half hour of searching, I've determined that Visual Studio cannot do this on its own, so my question is: does anyone know of any add-ins for 2005 or 2008 that can provide either one of the aforementioned features? Being able to highlight the current line your cursor is on would be nice too. I believe the add-in ReSharper can do this, but I'd prefer to use a free add-in rather than purchase one.

    Read the article

  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is then to look up every single word in the page to find out whether it has a related reference. In this example, let's say I've got a reference table with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output will look like this (with < and > replaced by @):

        to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...

    Instead of a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do this? Should I store all the DB entries in an array and compare them? Do an SQL query for each word (seems crazy)? Dump the table into a file and use a very long regex, or a "grep -f pattern data" way of doing it? I'm sure there must be a better way of doing it, I just don't have a clue about it; or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
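    Of the options listed, the lookup-table approach (one query to load all reference terms, then a single pass over the page with constant-time lookups per word) is usually the cheapest to try first. The sketch below illustrates that idea; it is written in Python only to show the shape of the algorithm, and the term dictionary and link format are assumptions. The same pattern maps onto a PHP associative array with one loop, or preg_replace_callback.

        # Hypothetical sketch: one pass over the text, O(1) dictionary lookups per word.
        # The reference table is assumed to have been loaded once from the database.
        references = {
            "reboot": "restart a computer",
            "linux": "operating system",
        }

        def link_terms(text):
            out = []
            for word in text.split(" "):
                # A real version would also handle punctuation and case.
                if word in references:
                    out.append('<a href="ref/%s" title="%s">%s</a>' % (word, references[word], word))
                else:
                    out.append(word)
            return " ".join(out)

        print(link_terms("to reboot your linux host in single-user mode you can ..."))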

    Read the article

  • Compiling scipy on Windows 32-bit

    - by Sridhar Ratnakumar
    Has anyone tried compiling SciPy on Windows using numpy-1.3.0 built with the pre-built ATLAS libraries (atlas3.6.0_WinNT_P4SSE2.zip) linked in the installation document? I get the following linker error and have no idea how to fix it:

        $ python setup.py config --compiler=mingw32 build --compiler=mingw32 install --root=i
        [...]
        creating build\temp.win32-2.6\Release
        creating build\temp.win32-2.6\Release\scipy
        creating build\temp.win32-2.6\Release\scipy\integrate
        compile options: '-DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\site-packages\numpy\core\include" -IC:\Python26\include -IC:\Python26\PC -c'
        gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\site-packages\numpy\core\include" -IC:\Python26\include -IC:\Python26\PC -c scipy\integrate\_odepackmodule.c -o build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o
        C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python26\libs -LC:\ActivePython32Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llapack -lpython26 -lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd
        C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_daxpy.o):ATL_F77wrap_axpy.c:(.text+0x3c): undefined reference to `ATL_daxpy'
        C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dscal.o):ATL_F77wrap_scal.c:(.text+0x26): undefined reference to `ATL_dscal'
        C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dcopy.o):ATL_F77wrap_copy.c:(.text+0x3d): undefined reference to `ATL_dcopy'
        C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_idamax.o):ATL_F77wrap_amax.c:(.text+0x1e): undefined reference to `ATL_idamax'
        C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_ddot.o):ATL_F77wrap_dot.c:(.text+0x36): undefined reference to `ATL_ddot'
        collect2: ld returned 1 exit status
        error: Command "C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python26\libs -LC:\Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llapack -lpython26 -lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd" failed with exit status 1

    Does anyone know what could have gone wrong here?

    Read the article

  • Need help implementing this algorithm with map Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that will go through a large data set, read some text files, and search for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me, but I really do need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that if I first did the standard implementation, it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that it is difficult for MapReduce to write to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to actually doing it, obviously I do not know enough and I am STUCK! Please help.
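    One way to picture this as a MapReduce job, assuming Hadoop Streaming is available on the cluster: the mapper reads lines, checks the third word against the small term list shipped with the job, and emits matching lines keyed by term, so the shuffle groups all matches per term instead of writing one file per analyze() call. The terms.txt file name and the tab-separated output format below are assumptions.

        #!/usr/bin/env python
        # mapper.py - a minimal Hadoop Streaming mapper sketch (assumed setup:
        # the term list is shipped alongside the job, e.g. via -files terms.txt).
        import sys

        # Load the term list once per mapper; it is assumed to be small.
        with open("terms.txt") as f:
            terms = set(line.strip() for line in f if line.strip())

        for line in sys.stdin:
            line = line.rstrip("\n")
            words = line.split()
            if len(words) >= 3 and words[2] in terms:
                # Emit key<TAB>value; grouping per term happens in the shuffle/reduce step.
                print("%s\t%s" % (words[2], line))

    A streaming invocation along the lines of hadoop jar hadoop-streaming*.jar -files terms.txt -mapper mapper.py -input <in> -output <out> would drive it; exact flag names and the choice of reducer vary by Hadoop version.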

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added.

    It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but that means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular whether anyone has built any significant applications on a document-oriented database.

    Read the article

  • Liferay Document Management System Workflow

    - by Rajkumar
    I am creating a DMS in Liferay. So far I can upload documents into the document library, and I can see the documents in the Documents and Media portlet. The problem is that although the status of the document is in the pending state, the workflow is not started. Below is my code; please help, this is very urgent.

        Folder folder = null;

        // getting folder
        try {
            folder = DLAppLocalServiceUtil.getFolder(10181, 0, folderName);
            System.out.println("getting folder");
        } catch (NoSuchFolderException e) {
            // creating folder
            System.out.println("creating folder");
            try {
                folder = DLAppLocalServiceUtil.addFolder(userId, 10181, 0, folderName, description, serviceContext);
            } catch (PortalException e3) {
                e3.printStackTrace();
            } catch (SystemException e3) {
                e3.printStackTrace();
            }
        } catch (PortalException e4) {
            e4.printStackTrace();
        } catch (SystemException e4) {
            e4.printStackTrace();
        }

        // adding file
        try {
            System.out.println("New File");
            fileEntry = DLAppLocalServiceUtil.addFileEntry(userId, 10181, folder.getFolderId(),
                    sourceFileName, mimeType, title, "testing description", "changeLog",
                    sampleChapter, serviceContext);

            Map<String, Serializable> workflowContext = new HashMap<String, Serializable>();
            workflowContext.put("event", DLSyncConstants.EVENT_CHECK_IN);
            DLFileEntryLocalServiceUtil.updateStatus(userId, fileEntry.getFileVersion().getFileVersionId(),
                    WorkflowConstants.ACTION_PUBLISH, workflowContext, serviceContext);

            System.out.println("after entry" + fileEntry.getFileEntryId());
        } catch (DuplicateFileException e) {
        } catch (PortalException e1) {
            e1.printStackTrace();
        } catch (SystemException e1) {
            e1.printStackTrace();
        }
        } catch (PortalException e) {
            e.printStackTrace();
        } catch (SystemException e) {
            e.printStackTrace();
        }
        return fileEntry.getFileEntryId();
        }

    I have even used:

        WorkflowHandlerRegistryUtil.startWorkflowInstance(companyId, userId, fileEntry.getClass().getName(),
                fileEntry.getClassPK, fileEntry, serviceContext);

    But I still have the same problem.

    Read the article

  • Why does my App.Config codebase not help .NET locate my assembly?

    - by pkolodziej
    I have the following client application and its corresponding config file:

        namespace Chapter9 {
            class Program {
                static void Main(string[] args) {
                    AppDomain.CurrentDomain.ExecuteAssembly("AssemblyPrivate.exe");
                }
            }
        }

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <codeBase href="file://C:\Users\djpiter\Documents\Visual Studio 2008\Projects\70536\AssemblyPrivate\bin\Debug\AssemblyPrivate.exe"/>
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    AssemblyPrivate.exe does not have a public key, nor is it located in the GAC. As far as I know, the runtime should parse the app.config file before looking for an assembly in the client app directory. The unhandled exception (wrapped for readability) is:

        Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly
        'file:///C:\Users\djpiter\Documents\Visual Studio 2008\Projects\70536\Chapter9\bin\Debug\AssemblyPrivate.exe'
        or one of its dependencies. The system cannot find the file specified.

    Why is it not working? I need to use dynamic binding (not static). Kind Regards, PK

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request, read the stream into ExpertPDF, and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but often things are getting cached. Take, for example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means the CSS is cached somewhere. Here's the relevant PDF creation code:

        // Create a request
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Method = "GET";

        // Send the request
        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
        if (resp.IsFromCache) {
            System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
        } else {
            System.Web.HttpContext.Current.Trace.Write("not from cache");
        }

        // Read the response
        pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");

    When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to force a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?

    Read the article

  • How to classify NN/NNP/NNS obtained from POS tagged document as a product feature

    - by Shweta .......
    I'm planning to perform sentiment analysis on reviews of product features (collected from an Amazon dataset). I have extracted the review text from the dataset and performed POS tagging on it, and I'm able to extract the NN/NNP tokens as well. But my doubt is: how do I come to know whether the extracted words qualify as features of the products? I know there are classifiers in NLTK, but I don't know how I should use them for my project. I'm assuming there are two ways of finding out whether an extracted word is a product feature. One is to compare it with a bag of words and find out if my word exists in that. Doubt: how do I create/get that bag of words? The second way is to implement some kind of apriori algorithm to find frequently occurring words and treat those as features. I would like to know which method is good and how to go about implementing it. Some pointers to available software or code snippets would be helpful! Thanks!
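    A minimal sketch of the frequency-based route, assuming NLTK with its tokenizer and tagger models installed: collect the noun tokens from the reviews, count them, and keep the ones frequent enough to plausibly be product features. The sample sentences and the threshold are assumptions; a real pipeline would add stemming, stopword removal, and multi-word phrases.

        # Hypothetical sketch: treat frequently occurring nouns as candidate features.
        # Assumes nltk.download('punkt') and the POS tagger model have been run
        # (resource names vary by NLTK version).
        from nltk import FreqDist, pos_tag, word_tokenize

        reviews = [
            "The battery life of this camera is great",
            "Battery died after two days but the lens is sharp",
        ]  # assumed sample data

        nouns = []
        for review in reviews:
            for word, tag in pos_tag(word_tokenize(review)):
                if tag in ("NN", "NNS", "NNP"):
                    nouns.append(word.lower())

        freq = FreqDist(nouns)
        min_count = 2  # assumed threshold; tune on real data
        candidate_features = [w for w, c in freq.most_common() if c >= min_count]
        print(candidate_features)   # e.g. ['battery']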

    Read the article

  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André. From Python, the value looks like "Andr\u00c3\u00a9". The input is then decoded using json.loads:

        >>> import json
        >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
        {u'name': u'Andr\xc3\xa9'}

    When I store the above in a utf8 MySQL database, the data is stored like the following using Django:

        SomeObject.objects.create(name=u'Andr\xc3\xa9')

    Querying the name column from a mysql shell or displaying it in a web page gives:

        André

    The web page displays in utf8:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My database is configured in utf8:

        mysql> SHOW VARIABLES LIKE 'collation%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_unicode_ci |
        | collation_server     | utf8_unicode_ci |
        +----------------------+-----------------+
        3 rows in set (0.00 sec)

        mysql> SHOW VARIABLES LIKE 'character_set%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | utf8                       |
        | character_set_connection | utf8                       |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | utf8                       |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+
        8 rows in set (0.00 sec)

    How can I retrieve the word André from a web service, store it properly in a database with no data loss, and display it on a web page in its original form?
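    What the stored value suggests is the classic double-encoding pattern: u'Andr\xc3\xa9' is the UTF-8 byte sequence for André that has been read back as Latin-1. A minimal sketch of the repair, assuming the rest of the pipeline stays unchanged, is to re-encode those code points as Latin-1 and decode them as UTF-8 before saving.

        # Hypothetical sketch: undo the Latin-1/UTF-8 double encoding before storing.
        raw = u'Andr\xc3\xa9'                          # what json.loads returned above
        fixed = raw.encode('latin-1').decode('utf-8')  # bytes 0xC3 0xA9 reinterpreted as UTF-8
        print(repr(fixed))                             # u'Andr\xe9' on Python 2, 'André' on Python 3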

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach:

        - there are literally hundreds of classes - the code file is half a meg in size
        - duplicate classes (ex. Type, Type1 - which both represent the same type)
        - there are classes declared as inheriting from a base class, but that base class is not generated/defined

    I understand that there are limitations to the types of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold:

        1. Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema?
        2. Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?

    Thank you very much! -Sasha Borodin DFWHC.org

    Read the article

  • assembly of pdp-11(simulator)

    - by lego69
    I have this code on pdp-11:

        tks = 177560
        tkb = 177562
        tps = 177564
        tpb = 177566
        lcs = 177546

        . = torg + 2000
        main:   mov #main, sp
                mov #kb_int, @#60
                mov #200, @#62
                mov #101, @#tks
                mov #clock, @#100
                mov #300, @#102
                mov #100, @#lcs
        loop:   mov @#tks,r2
                aslb r2
                bmi loop
                halt

        clock:  tst bufferg
                beq clk_end
                mov #msg,-(sp)
                jsr pc, print_str
                tst (sp)+
                clr bufferg
                bic #100,@#tks
                clr @#lcs
        clk_end: rti

        kb_int: mov r1,-(sp)
                jsr pc, read_char
                movb r1,@buff_ptr
                inc buff_ptr
                bis #1,@#tks
                cmpb r1,#'q
                bne next_if
                mov #0, @#tks
        next_if: cmpb r1,#32.
                bne end_kb_int
                clrb @buff_ptr
                mov #buffer,-(sp)
                jsr pc, print_str
                tst (sp)+
                mov #buffer,buff_ptr
        end_kb_int:
                mov (sp)+,r1
                rti

        ;#############################
        read_char:
                tstb @#tks
                bpl read_char
                movb @#tkb, r1
                rts pc

        ;#############################
        print_char:
                tstb @#tps
                bpl print_char
                movb r1, @#tpb
                rts pc

        ;#############################
        print_str:
                mov r1,-(sp)
                mov r2,-(sp)
                mov 6(sp),r2
        str_loop:
                movb (r2)+,r1
                beq pr_str_end
                jsr pc, print_char
                br str_loop
        pr_str_end:
                mov (sp)+,r2
                mov (sp)+,r1
                rts pc

        . = torg + 3000
        msg:    .ascii<Something is wrong!>
                .byte 0
                .even
        buff_ptr: .word buffer
        buffer: .blkw 3
        bufferg: .word 0

    Can somebody please explain how this part is working, thanks in advance:

                movb r1,@buff_ptr
                inc buff_ptr
                bis #1,@#tks
                cmpb r1,#'q
                bne next_if
                mov #0, @#tks
        next_if: cmpb r1,#32.
                bne end_kb_int
                clrb @buff_ptr
                mov #buffer,-(sp)
                jsr pc, print_str
                tst (sp)+
                mov #buffer,buff_ptr

    Read the article

  • NSOperations or NSThread for bursts of smaller tasks that continuously cancel each other?

    - by RickiG
    Hi, I would like to see if I can make a "search as you type" implementation, against a web service, that is optimized enough to run on an iPhone. The idea is that the user starts typing a word, e.g. "Foo"; after each new letter I wait XXX ms to see if they type another letter, and if they don't, I call the web service using the word as a parameter. I would like to move the web service call and the subsequent parsing of the result to a different thread. I have written a simple SearchWebService class; it has only one public method:

        - (void) searchFor:(NSString*) str;

    This method tests whether a search is already in progress (the user has had a XXX ms delay in their typing) and subsequently stops that search and starts a new one. When a result is ready, a delegate method is called:

        - (NSArray*) resultsReady;

    I can't figure out how to get this functionality threaded. If I keep spawning new threads each time a user has a XXX ms delay in their typing, I end up in a bad spot with many threads, especially because I don't need any search other than the last one. Instead of spawning threads continuously, I have tried keeping one thread running in the background all the time:

        - (void) keepRunning {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            SearchWebService *searchObj = [[SearchWebService alloc] init];
            [[NSRunLoop currentRunLoop] run]; //keeps it alive
            [searchObj release];
            [pool release];
        }

    But I can't figure out how to access the "searchFor" method of "searchObj". The above code works and keeps running, but I can't message searchObj or retrieve the resultsReady objects. Hope someone can point me in the right direction; threading is giving me grief :) Thank you.

    Read the article

  • Hierarchy of meaning

    - by asldkncvas
    I am looking for a method to build a hierarchy of words. Background: I am an "amateur" natural language processing enthusiast, and right now one of the problems I am interested in is determining a hierarchy of word semantics from a group of words. For example, if I have a set that contains a "super" representation of the others, i.e.

        [cat, dog, monkey, animal, bird, ...]

    I am interested in any technique that would allow me to extract the word 'animal', which is the most meaningful and accurate representation of the other words in the set. Note: they are NOT the same in meaning; cat != dog != monkey != animal, BUT cat is a subset of animal and dog is a subset of animal. I know by now a lot of you will be telling me to use WordNet. Well, I will try to, but I am actually working in a very domain-specific area to which WordNet doesn't apply, because:

        1) Most words are not found in WordNet
        2) All the words are in another language; translation is possible but to limited effect.

    Another example would be:

        [noise reduction, focal length, flash, functionality, ...]

    where functionality includes everything in this set. I have also tried crawling Wikipedia pages and applying some techniques based on tf-idf etc., but Wikipedia pages don't really do much either. Can someone possibly enlighten me as to what direction my research should go in? (I could use anything.)
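    For the parts of a vocabulary that do overlap with WordNet (or a WordNet-style resource in the target language), the usual building block is the hypernym graph: the "super" word is a lowest common hypernym of the set. A small sketch of that mechanism with NLTK follows, purely as an illustration; it assumes the English WordNet corpus is available and naively takes the first noun synset of each word, which a real system would disambiguate.

        # Hypothetical sketch: find a common "super" concept via WordNet hypernyms.
        # Assumes the 'wordnet' corpus has been downloaded via nltk.download().
        from nltk.corpus import wordnet as wn

        words = ["cat", "dog", "monkey", "bird"]
        synsets = [wn.synsets(w, pos=wn.NOUN)[0] for w in words]  # naive sense choice

        # Fold the set down to a shared ancestor, pairwise.
        common = synsets[0]
        for s in synsets[1:]:
            common = common.lowest_common_hypernyms(s)[0]

        print(common)                # e.g. a synset like vertebrate.n.01 or similar
        print(common.lemma_names())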

    Read the article

  • Custom Solr sorting

    - by Tom
    Hello everyone, I've been asked to do an evaluation of Solr as an alternative to a commercial search engine. The application currently has a very particular way of sorting results, using something called "buckets". I'll try to explain with a bit of detail.

    In the interface there are two fields: "what" and "where". Both fields are actually sets of fields (what = category, name, contact info... and where = country, state, region, city...), so the copyField feature of Solr immediately comes to mind. Now, based on which field produced the actual match, the result should end up in a specific bucket. In particular, the first bucket contains all the result documents that have an exact match on the category field, the second bucket all exact matches on name, the third partial matches on category, the fourth partial matches on name, the fifth matches on contact info, etc. Then, within each of those first-tier buckets, all results are placed in second-tier buckets depending on what location was matched: city, then region, then province, and so on. To complicate things even more, there is also a third-tier bucket where results are placed according to the value of a ranking field: all documents with the value 1 in the ranking field go in bucket 1, and so on. And finally, results should be randomized within the third-tier bucket. On top of this they obviously want support for facets and paging.

    My apologies for the long mail, but I would greatly appreciate feedback and/or suggestions. I'm aware that this is a very particular problem, but anything that points me in the right direction is helpful. Cheers, Tom

    Read the article

  • Jboss 6 Cluster Singleton Clustered

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs. So far we have successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration.

    Bean interface:

        @Remote
        public interface IUniverse {
            ...
        }

    Bean implementation:

        @Clustered
        @Stateful
        public class Universe implements IUniverse {
            private static Vector<String> messages = new Vector<String>();
            ...
        }

    jboss-beans.xml configuration:

        <deployment xmlns="urn:jboss:bean-deployer:2.0">
            <!-- This bean is an example of a clustered singleton -->
            <bean name="Universe" class="Universe">
            </bean>
            <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
                <property name="HAPartition"><inject bean="HAPartition"/></property>
                <property name="target"><inject bean="Universe"/></property>
                <property name="targetStartMethod">startSingleton</property>
                <property name="targetStopMethod">stopSingleton</property>
            </bean>
        </deployment>

    The main problem with this implementation is that after the master node (the one that holds the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to its default. Please note that everything was built following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 documentation on clustering, is appreciated.

    Read the article
