Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 394/457 | < Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would simply be (in C#) Dictionary<String,DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating a custom data structure to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
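
    A minimal sketch of the hash-plus-timestamp idea in Python (not the asker's C# code; the TTL, the SHA-1 digest, and the lock-based synchronization are illustrative assumptions):

        import hashlib
        import threading
        import time

        class DuplicateGate:
            """Tracks string hashes and suppresses repeats for `ttl_seconds`."""

            def __init__(self, ttl_seconds=300):
                self.ttl = ttl_seconds
                self.seen = {}            # digest -> last-seen timestamp
                self.lock = threading.Lock()

            def should_process(self, s: str) -> bool:
                digest = hashlib.sha1(s.encode("utf-8")).digest()  # 20 bytes, not the full string
                now = time.monotonic()
                with self.lock:
                    last = self.seen.get(digest)
                    if last is not None and now - last < self.ttl:
                        return False      # duplicate seen recently: skip processing
                    self.seen[digest] = now
                    return True

            def purge(self):
                """Call periodically to drop expired entries and bound memory."""
                cutoff = time.monotonic() - self.ttl
                with self.lock:
                    self.seen = {d: t for d, t in self.seen.items() if t >= cutoff}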

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory stays well under control. What we observe, though, is that with memory mapping the system cache keeps growing until it occupies all available physical memory, which slows down the entire system. My question is how to prevent the system cache from hogging the physical memory. I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the Windows XP caching behavior; any good links explaining it would also be helpful.
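
    For what it's worth, a minimal Python sketch of mapping only a window of a large file at a time (illustrative only, and it does not change the Windows cache-manager behaviour described above):

        import mmap
        import os

        GRANULARITY = mmap.ALLOCATIONGRANULARITY  # map offsets must be multiples of this

        def read_window(path, offset, length):
            """Map just one window of a large file instead of the whole thing."""
            aligned = (offset // GRANULARITY) * GRANULARITY
            skew = offset - aligned
            size = os.path.getsize(path)
            span = min(length + skew, size - aligned)
            with open(path, "rb") as f:
                with mmap.mmap(f.fileno(), span, access=mmap.ACCESS_READ, offset=aligned) as view:
                    return view[skew:skew + length]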

    Read the article

  • Where can I find a powerful, standards-compliant, web-based interactive org chart API?

    - by Adam Soltys
    Hi, I'm looking to build an interactive web-based org chart for a large organization. I somewhat like the interface at ancestry.com where you can hover over people, pan/zoom around, and click on different nodes to make them the root. Ideally, I'd like it if people could belong to multiple organizational entities like committees, working groups, etc. In other words, the API should support graphs in general, not just trees. I'd like to be able to visually explode each organizational substructure into its constituents by clicking on it, with a nice animation of the employees ballooning or spilling out so you can really interactively drill down through the organization. I found http://code.google.com/apis/visualization/documentation/gallery/orgchart.html but it looks a bit rudimentary. I know there are desktop tools like OrgPlus and Visio that can build static charts, but I'm really looking for a free, web-based API with open standards-based output like SVG or HTML5 Canvas elements rather than Flash or some proprietary output. Something I can embed into a custom web application and style myself. Something interactive.

    Read the article

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like:

        random_sorts(sort_id, created_at, user_id NULL if guest)
        random_sort_items(sort_id, item_id, position)

    And then simply store the sort_id in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And, as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
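
    A storage-free variant of the "function of the session ID" idea, sketched in Python rather than Ruby (illustrative; gaps in the ID sequence don't matter because the hash is only used as a sort key, though newly created rows will simply slot into the existing order):

        import hashlib

        def session_sort_key(session_id, item_id):
            """Deterministic pseudo-random key: the same session always yields the same order."""
            raw = f"{session_id}:{item_id}".encode("utf-8")
            return hashlib.sha1(raw).hexdigest()

        def page_of_ids(all_ids, session_id, page, per_page=25):
            """Sort the full ID list by the session-specific key, then slice out one page."""
            ordered = sorted(all_ids, key=lambda i: session_sort_key(session_id, i))
            start = (page - 1) * per_page
            return ordered[start:start + per_page]

        # Example: two calls with the same session give identical pages
        ids = [3, 7, 8, 12, 40, 41, 95]
        assert page_of_ids(ids, "abc123", 1, 3) == page_of_ids(ids, "abc123", 1, 3)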

    Read the article

  • Insert only adds up to 1000 records and ignores all records after that

    - by user560559
    I have a large database where the client stores personal messages and fires email notifications (if allowed by the users). Certain users have the option of sending messages to their entire network of friends. Some users have over 5000 friends in their network, so if they select the whole network they'll be sending messages to over 5000 friends, and the system will store all the messages in a table. The problem is that it does not insert more than 1000 records and ignores all inserts after the first 1000. I have increased the packet size and bulk_insert_buffer_size, but still no luck. Since the system stores some of the info in another table for reports, every insert returns its new message id. Because of this I cannot use "INSERT INTO table (column1, column2) VALUES (value1, value2), (value1, value2), ... etc." The table engine is InnoDB, the MySQL version is 5.1.3, and it is hosted on Amazon Web Services. All I want is to fix this issue of inserting more than 1000 records at a time. As mentioned earlier, it works fine but only up to 1000 records and simply ignores all the records after that. I'm using a PHP foreach(){} to insert a message for each friend and, if an email address is available, send a notification to the user. This foreach(){} also inserts the same record into another table (with only 3 columns) for generating reports.
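
    As an aside, a rough sketch in Python of inserting row by row (so each new message id is still available for the report table) while committing in bounded chunks, assuming a MySQL DB-API driver; the table and column names are made up, and this does not explain the 1000-row cutoff itself:

        CHUNK = 500  # commit every 500 rows so no single transaction grows unbounded

        def send_to_network(conn, sender_id, friend_ids, body):
            cur = conn.cursor()
            for n, friend_id in enumerate(friend_ids, start=1):
                cur.execute(
                    "INSERT INTO messages (sender_id, recipient_id, body) VALUES (%s, %s, %s)",
                    (sender_id, friend_id, body),
                )
                message_id = cur.lastrowid  # still get the per-row id the report table needs
                cur.execute(
                    "INSERT INTO message_reports (message_id, recipient_id, sender_id) VALUES (%s, %s, %s)",
                    (message_id, friend_id, sender_id),
                )
                if n % CHUNK == 0:
                    conn.commit()
            conn.commit()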

    Read the article

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script. EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu

        /^--$/ {
            i = 0; line[++i] = $0; getline
            if ($0 ~ /-- Definition/) {
                inserts = 0
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline   # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            } else {
                print "--"
            }
        }

        { print }
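
    A rough alternative sketch of the same filter in Python, assuming the common mysqldump layout where each table's section starts with a "-- Table structure for table ..." comment (the marker text varies between dump versions, so treat it as illustrative):

        import sys

        def strip_empty_tables(lines):
            """Yield dump lines, skipping any table section that contains no INSERT."""
            buffer, in_table = [], False
            for line in lines:
                if line.startswith("-- Table structure for table"):
                    # Flush the previous section only if it actually had data.
                    if buffer and any(l.startswith("INSERT ") for l in buffer):
                        yield from buffer
                    buffer, in_table = [line], True
                elif in_table:
                    buffer.append(line)
                else:
                    yield line
            if buffer and any(l.startswith("INSERT ") for l in buffer):
                yield from buffer

        if __name__ == "__main__":
            for out in strip_empty_tables(sys.stdin):
                sys.stdout.write(out)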

    Read the article

  • ASP.NET problem - Firebug shows odd behaviour

    - by Brandi
    I have an ASP.NET application that does a large database read. It loads up a gridview inside an update panel. In VS2008, just running on my local machine, it runs fantastically. In production (identical code, just published and put on one of our network servers), it runs slow as dirt. Debug is set to false, so this is not the cause of the slowdown. I'm not an experienced web developer, so besides that, feel free to suggest the obvious. I have been using Firebug to determine what's going on, and here is what it has turned up: in production, there are around 500 requests; the timeline bar is very short; the size column varies from run to run, but is always the same for the duration of a run. Locally, there are about 30 requests and the timeline bar takes up the entire space. Can anyone shed some light on why this is happening and what I can do to fix it? Also, I can't find much of anything on the web about this, so any references are helpful too.

    Read the article

  • Provider not notified from cookbook_file

    - by wittyhandle
    I'm working on an ssl provider using Vagrant (1.0.5) and chef-solo (10.12.0). I have my provider, called ssl, within a cookbook called gtm_cq, and I declare it as such in my cookbook's default recipe:

        gtm_cq_ssl "author" do
          # attributes will come later
        end

    I then have a cookbook_file like the one below that should notify my ssl provider's import action once it pushes the cert up to the server:

        cookbook_file "#{node[:cq][:ssl][:author_cert_location]}/foo.cer" do
          source "foo.cer"
          owner "crx"
          group "root"
          mode "0644"
          notifies :import, resources(:gtm_cq_ssl => "author")
        end

    When I run this, foo.cer gets pushed up as expected, but the import action of my ssl provider is never called. The most I see of any reference is this couple of lines in the log (log headers removed):

        .. cookbook_file[/opt/cq5/author/foo.cer] sending import action to gtm_cq_ssl[author] (delayed)
        .. Processing gtm_cq_ssl[author] action import (gtm_cq::author line 34)

    There's a large, very obvious log statement as well as the use of another cookbook_file for a test file to push something up to the server. No log statement, no test file pushed. I'm certain, too, that the foo.cer file is removed from the server before each test. I found that if I edit my notifies line like so, with :immediately, it seems to work:

        notifies :import, resources(:gtm_cq_ssl => "author"), :immediately

    And I suppose this is OK in my particular case, but it would seem something is not right if that's the only way I can call my provider. Any help on this would be greatly appreciated. Thanks!

    Read the article

  • Reduce file size for charts pasted from Excel into Word

    - by Steve Clanton
    I have been creating reports by copying some charts and data from an Excel document into a Word document. I am pasting into a content control, so I use ChartObject.CopyPicture in Excel and ContentControl.Range.Paste in Word. This is done in a loop:

        Set ws = ThisWorkbook.Worksheets("Charts")
        With ws
            For Each cc In wordDocument.ContentControls
                If cc.Range.InlineShapes.Count > 0 Then
                    scaleHeight = cc.Range.InlineShapes(1).scaleHeight
                    scaleWidth = cc.Range.InlineShapes(1).scaleWidth
                    cc.Range.InlineShapes(1).Delete
                    .ChartObjects(cc.Tag).CopyPicture Appearance:=xlScreen, Format:=xlPicture
                    cc.Range.Paste
                    cc.Range.InlineShapes(1).scaleHeight = scaleHeight
                    cc.Range.InlineShapes(1).scaleWidth = scaleWidth
                ElseIf ...
            Next cc
        End With

    Creating these reports using Office 2007 yielded files that were around 6 MB, but creating them (using the same worksheet and document) in Office 2010 yields a file that is around 10 times as large. After unzipping the docx, I found that the extra size comes from the EMF files that correspond to the charts pasted in using VBA. Where they ranged from 360 to 900 KB before, they are now 5-18 MB, and the graphics are not visibly better. I am able to CopyPicture with the format xlBitmap, and while that is somewhat smaller, it is still larger than the EMF generated by Office 2007 and noticeably poorer quality. Are there any other options for reducing the file size? Ideally, I would like to produce a file with the same resolution for the charts as I did using Office 2007. Is there any way that uses VBA only (without modifying the charts in the spreadsheet)?

    Read the article

  • Deleting list items via ProcessBatchData()

    - by q-tuyen
    You can build a batch string to delete all of the items from a SharePoint list like this:

        //create new StringBuilder
        StringBuilder batchString = new StringBuilder();

        //add the main text to the stringbuilder
        batchString.Append("");

        //add each item to the batch string and give it a command Delete
        foreach (SPListItem item in itemCollection)
        {
            //create a new method section
            batchString.Append("");
            //insert the listid to know where to delete from
            batchString.Append("" + Convert.ToString(item.ParentList.ID) + "");
            //the item to delete
            batchString.Append("" + Convert.ToString(item.ID) + "");
            //set the action you would like to perform
            batchString.Append("Delete");
            //close the method section
            batchString.Append("");
        }

        //close the batch section
        batchString.Append("");

        //perform the batch
        SPContext.Current.Web.ProcessBatchData(batchString.ToString());

    The only disadvantage that I can think of right now is that all the items you delete will be put in the recycle bin. How can I prevent that? I also found a solution like the one below:

        // web is the SPWeb object your lists belong to
        // Before starting your deletion
        web.Site.WebApplication.RecycleBinEnabled = false;
        // When you are finished, re-enable it
        web.Site.WebApplication.RecycleBinEnabled = true;

    Ref: http://www.entwicklungsgedanken.de/2008/04/02/how-to-speed-up-the-deletion-of-large-amounts-of-list-items-within-sharepoint/

    But the disadvantage of that solution is that it does not just stop future deletions from going to the Recycle Bin; it also deletes all the existing items in it, which users do not want. Any idea how to prevent the existing items from being deleted? Many thanks in advance, TQT

    Read the article

  • CURL request incomplete, suspect timeout but not sure.

    - by girlygeek
    I am currently using cURL via a PHP script, run as a daily cron job, to export product data in CSV format from a site's admin area. The normal way of exporting the data is to go to the Export page in a browser, set the configuration, then click the "export data" button. But as the number of products I am exporting is very large, and it takes more than 5-10 minutes to export the data, I've decided to use PHP's curl functions to mimic this on a daily basis via cron. Previously this was working fine, but recently, after I increased the number of products in the store by 500+, the script fails to return the exported data. Testing it manually by clicking the "export" button in a browser does return the data correctly, so there is no timeout issue with running the export in a browser manually. I've tested this, and by removing/decreasing the number of products (and thus the time needed), the PHP cURL script works fine again when run from cron. So I suspect it has something to do with timeouts, specifically with the curl functions in PHP. I've set both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT to '0' to try. In the PHP cURL script I've also set set_time_limit(3000). But still it does not work; the request times out and the script fails to return a complete set of CSV data. Any help in resolving or understanding this issue will be much appreciated!
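
    For comparison, a sketch of the same kind of long export pull in Python with explicit connect/read timeouts and streaming to disk (uses the requests library; the URL and parameters are placeholders, and this does not rule out a server-side or proxy timeout):

        import requests

        EXPORT_URL = "https://example.com/admin/export"  # placeholder

        def pull_export(out_path):
            # (connect timeout, read timeout) in seconds; the read timeout applies per
            # socket read, not to the whole transfer, so a slow-but-steady export keeps
            # going as long as bytes keep arriving.
            with requests.get(EXPORT_URL, params={"format": "csv"},
                              stream=True, timeout=(30, 600)) as resp:
                resp.raise_for_status()
                with open(out_path, "wb") as out:
                    for chunk in resp.iter_content(chunk_size=65536):
                        out.write(chunk)

        if __name__ == "__main__":
            pull_export("products.csv")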

    Read the article

  • Rails : can't write unknown attribute `url'

    - by user2954789
    I am new to Ruby on Rails, and I am learning by creating a blog. I am not able to save to my blogs table and I get the error "can't write unknown attribute `url'".

    Blogs migration (db/migrate/):

        class CreateBlogs < ActiveRecord::Migration
          def change
            create_table :blogs do |t|
              t.string :title
              t.text :description
              t.string :slug
              t.timestamps
            end
          end
        end

    Blogs model (app/models/blogs.rb):

        class Blogs < ActiveRecord::Base
          acts_as_url :title

          def to_param
            url
          end

          validates :title, presence: true
        end

    Blogs controller (app/controllers/blogs_controller.rb):

        class BlogsController < ApplicationController
          before_action :require_login

          def new
            @blogs = Blogs.new
          end

          def show
            @blogs = Blogs.find(params[:id])
          end

          def create
            @blogs = Blogs.new(blogs_params)
            if @blogs.save
              flash[:success] = "Your Blog has been created."
              redirect_to home_path
            else
              render 'new'
            end
          end

          def blogs_params
            params.require(:blogs).permit(:title, :description)
          end

          private

          def require_login
            unless signed_in?
              flash[:error] = "You must be logged in to create a new Blog"
              redirect_to signin_path
            end
          end
        end

    Blogs form (app/views/blogs/new.html.erb):

        <%= form_for @blogs, url: blogs_path do |f| %><br/>
          <%= render 'shared/error_messages_blogs' %><br/>
          <%= f.label :title %><br/>
          <%= f.text_field :title %><br/>
          <%= f.label :description %><br/>
          <%= f.text_area :description %><br/>
          <%= f.submit "Submit Blog", class: "btn btn-large btn-primary" %><br/>
        <% end %><br/>

    I have also added "resources :blogs" to my routes.rb file. I get this error in the controller at if @blogs.save.

    Read the article

  • How to lazy-process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files I came across hexpat. There is an example in the Haskell Wiki that claims to "Process document before handling error, so we get lazy processing." For testing purposes I redirected the output to /dev/null and threw a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Then I removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
          inputText <- L.readFile filename
          let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError)
          hFile <- openFile "/dev/null" WriteMode
          L.hPutStr hFile $ format xml
          hClose hFile
          return ()

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/

    Read the article

  • VS2008 link error using SafeInt3.hpp in 64-bit mode

    - by photo_tom
    I have the code below, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe". To quote the Channel 9 video on the software, "it writes the code that you should". Which is my case: I have a class that manages a large in-memory cache of objects (6 GB) and I am very concerned about making sure that I don't have overflow/underflow issues on my pointers/sizes/other integer variables. For this use, it solves many problems. My problem comes when moving from 32-bit development mode to 64-bit production mode. When I build the app in this mode, I get the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like to either (a) prevent the warning from occurring or (b) make it disappear so that my QA department doesn't flag it as a problem. After spending some time researching it, I cannot find a way to do either.

    Read the article

  • Populate a TreeView on demand with data from an XML document

    - by m6a-uds
    Hi, I have a large XML file (3000+ nodes) that I want to represent in a TreeView in ASP.NET. I cannot databind it to an XmlDataSource because loading the TreeView would then be way too slow (I never even waited long enough to see it finish...). So the solution would be to use the PopulateOnDemand property of the TreeNodes to load data only when needed. Problem is, I can't think of a way to achieve this... How can I, based on the ID of a node, search an XmlDocument to get all the child nodes of the node having this ID? The XML would look like this:

        <document ID="1">
          <document ID="2">
            <document ID="3">
            </document>
          </document>
          <document ID="4">
          </document>
        </document>

    There are no rules on how many levels it can go down or anything...
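
    Not the ASP.NET answer, but a small Python/ElementTree sketch of the lookup itself (find the node with a given ID and return its direct children), just to illustrate the on-demand query:

        import xml.etree.ElementTree as ET

        SAMPLE = """
        <document ID="1">
          <document ID="2">
            <document ID="3"></document>
          </document>
          <document ID="4"></document>
        </document>
        """

        def children_of(root, node_id):
            """Return the direct child <document> elements of the node with this ID."""
            if root.get("ID") == node_id:
                node = root
            else:
                node = root.find(f".//document[@ID='{node_id}']")
            return [] if node is None else list(node)

        root = ET.fromstring(SAMPLE)
        print([child.get("ID") for child in children_of(root, "1")])  # ['2', '4']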

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables is geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Is it possible to split the table data into two files, so that all rows where DataType != 2 go into File_A and the DataType = 2 rows go into File_B? That way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.

    Read the article

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend", which performs these operations, is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit but I am unsure of how to apply it. Specifically, what are some good options to structure this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between the languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?

    Read the article

  • Best way of showing more results with JavaScript/CSS

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing the search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests and, as soon as a response arrives, it appends the info to a specific element on my page. Each result is shown as a line... The problem is that in most cases there are going to be more than 1000 results, and this would give the page a very long scroll. My idea was to show only the first 15 results and, when the user clicks "show more", the element would expand and show the next 15 results, and so on... This would be easier to do if the website weren't responsive, but because it is, I can't find the proper way of implementing what I want without lowering the website's performance. I have two ideas: The first is to use something like #element .div:nth-child(-n+15) in my CSS and figure out a way of changing the "15" to however many results I want to show... I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with Less CSS? The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript I would check whether a specific CSS class exists (like .show-15, .show-30, .show-45), add that class to my element, and if it doesn't exist, create it somehow... Any help would be appreciated.

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: pre-normalize all the term vectors; and calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000), which would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
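
    A minimal Python sketch of the pre-normalized group-centroid idea, where cosine similarity against each group reduces to a dot product; the tokenization and weighting are deliberately naive placeholders:

        import math
        from collections import Counter

        def term_vector(text):
            """Raw term-frequency vector; swap in tf-idf or better tokenization as needed."""
            return Counter(text.lower().split())

        def normalize(vec):
            norm = math.sqrt(sum(v * v for v in vec.values()))
            return {t: v / norm for t, v in vec.items()} if norm else {}

        def cosine(a_norm, b_norm):
            # Both vectors are pre-normalized, so cosine similarity is just a dot product
            # over the smaller vector's terms.
            if len(b_norm) < len(a_norm):
                a_norm, b_norm = b_norm, a_norm
            return sum(v * b_norm.get(t, 0.0) for t, v in a_norm.items())

        # One centroid per group (folder/tag) instead of one vector per e-mail.
        groups = {
            "receipts": normalize(term_vector("invoice order total payment receipt")),
            "newsletters": normalize(term_vector("unsubscribe weekly digest news update")),
        }

        query = normalize(term_vector("your payment receipt and invoice total"))
        best = max(groups, key=lambda g: cosine(query, groups[g]))
        print(best)  # receipts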

    Read the article

  • multiple models in Rails with a shared interface

    - by dfondente
    I'm not sure of the best structure for a particular situation in Rails. We have several types of workshops. The administration of the workshops is the same regardless of workshop type, so the data for the workshops is in a single model. We collect feedback from participants about the workshops, and the questionnaire is different for each type of workshop. I want to access the feedback about the workshop from the workshop model, but the class of the associated model will depend on the type of workshop. If I was doing this in something other than Rails, I would set up an abstract class for WorkshopFeedback, and then have subclasses for each type of workshop: WorkshopFeedbackOne, WorkshopFeedbackTwo, WorkshopFeedbackThree. I'm unsure how to best handle this with Rails. I currently have:

        class Workshop < ActiveRecord::Base
          has_many :workshop_feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          has_many :feedback_ones
          has_many :feedback_twos
          has_many :feedback_threes
        end

        class FeedbackOne < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackTwo < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackThree < ActiveRecord::Base
          belongs_to :feedback
        end

    This doesn't seem like the cleanest way to access the feedback from the workshop model, as accessing the correct feedback will require logic investigating the Workshop type and then choosing, for instance, @workshop.feedback.feedback_one. Is there a better way to handle this situation? Would it be better to use a polymorphic association for feedback? Or maybe using a Module or Mixin for the shared Feedback interface? Note: I am avoiding using Single Table Inheritance here because the FeedbackOne, FeedbackTwo, FeedbackThree models do not share much common data, so I would end up with a large, sparsely populated table with STI.

    Read the article

  • Intelligent search and generation of Java code, preferably using Python?

    - by Ipsquiggle
    Basically, I do lots of one-off code generation, large-scale refactorings, etc. etc. in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in pseudocode:

        Generating an implementation for an interface
        search within my project:
            for each Interface as iName:
                write class(name=iName+"Impl", implements=iName)
                search within the body of iName:
                    for each Method as mName:
                        write method(name=mName, body="// TODO implement this...")

    Basically, the tool I'm searching for would allow me to: parse files according to their Java structure ("search for interfaces"); search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances"); run searches with structural context ("within the body of the current result"); and easily replace or generate code (with helpers to generate, as above, or functions for replacing: "rename the interface to Foo", "insert the line Blah.Blah()", etc.). The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write up a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this?
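
    Not a real parser, but a crude Python/regex sketch of the interface-to-stub idea above; it will misfire on comments, generics, annotations, and multi-line declarations, so a robust tool would need an actual Java grammar:

        import re
        from pathlib import Path

        IFACE_RE = re.compile(r"\binterface\s+(\w+)")
        METHOD_RE = re.compile(r"^\s*(?:public\s+)?([\w<>\[\]]+)\s+(\w+)\s*\(([^)]*)\)\s*;", re.M)

        def stub_for(source: str):
            """Generate a skeleton FooImpl for the first interface found in `source`."""
            iface = IFACE_RE.search(source)
            if not iface:
                return None
            name = iface.group(1)
            methods = []
            for ret, mname, params in METHOD_RE.findall(source):
                body = "        // TODO implement this..."
                if ret != "void":
                    body += "\n        return " + ("0" if ret in ("int", "long", "double") else "null") + ";"
                methods.append(f"    public {ret} {mname}({params}) {{\n{body}\n    }}")
            return f"public class {name}Impl implements {name} {{\n\n" + "\n\n".join(methods) + "\n}\n"

        for path in Path("src").rglob("*.java"):
            stub = stub_for(path.read_text())
            if stub:
                print(f"// generated from {path}\n{stub}")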

    Read the article

  • Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file

    - by badgerman1
    I'm trying to make a very large file editor (where the editor only stores a part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to be able to update the text view buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of the file. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and scrollbars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the file size (which I try to set via the Gtk.Adjustment parameters) when I load text into the textview. I need to have a widget that "knows" that it is looking at part of a file and can load/unload buffers as necessary to view different parts of the file. So far, I believe I'll respond to "change_view" to calculate when I'm off, or about to be off, the current buffer and need to load the next one, but I don't know how to get the scrollbars so that the top relates to the beginning of the file and the bottom relates to the end of the file, rather than to the loaded buffer in the textview. Any help would be greatly appreciated, thanks!
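
    One way to decouple the scrollbar from the loaded buffer, sketched with PyGObject/GTK 3 (assumptions: a separate Gtk.Scrollbar whose Gtk.Adjustment spans the file size in bytes, chunks reloaded on value-changed, and no attempt to respect line boundaries or support editing):

        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        import os

        CHUNK = 64 * 1024  # bytes of the file kept in the TextView at a time

        class ChunkedViewer(Gtk.Window):
            def __init__(self, path):
                super().__init__(title=path)
                self.path = path
                self.textview = Gtk.TextView()

                # The scrollbar's adjustment spans the whole file, not the loaded buffer.
                size = os.path.getsize(path)
                self.adj = Gtk.Adjustment(value=0, lower=0, upper=size,
                                          step_increment=1024, page_increment=CHUNK,
                                          page_size=CHUNK)
                self.adj.connect("value-changed", self.on_scroll)

                scrollbar = Gtk.Scrollbar(orientation=Gtk.Orientation.VERTICAL,
                                          adjustment=self.adj)
                box = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL)
                box.pack_start(self.textview, True, True, 0)
                box.pack_start(scrollbar, False, False, 0)
                self.add(box)
                self.on_scroll(self.adj)  # load the first chunk

            def on_scroll(self, adj):
                # Load only the chunk of the file that the scrollbar position points at.
                offset = int(adj.get_value())
                with open(self.path, "rb") as f:
                    f.seek(offset)
                    chunk = f.read(CHUNK)
                self.textview.get_buffer().set_text(chunk.decode("utf-8", errors="replace"))

        win = ChunkedViewer("big.log")
        win.connect("destroy", Gtk.main_quit)
        win.set_default_size(800, 600)
        win.show_all()
        Gtk.main()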

    Read the article

  • How can I keep the logic to translate a ViewModel's values into a Where clause for a LINQ query out of my controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a view model that doesn't have any persistent backing; it is just a view model used to generate a search input form. I want to build a large Where clause from the values the user entered. If the action accepts a SearchViewModel parameter, how do I do this without passing my view model to my service layer? The service shouldn't know about view models, right? Oh, and if I serialize it, it would be a big string and the key/values wouldn't be strongly typed. A snippet of SearchViewModel:

        [Display(Name = "Address")]
        public string AddressKeywords { get; set; }

        /// <summary>
        /// Gets or sets the census.
        /// </summary>
        public string Census { get; set; }

        /// <summary>
        /// Gets or sets the lot block sub.
        /// </summary>
        public string LotBlockSub { get; set; }

        /// <summary>
        /// Gets or sets the owner keywords.
        /// </summary>
        [Display(Name = "Owner")]
        public string OwnerKeywords { get; set; }

    In my controller action I was thinking of something like the code below, but I would think all this logic doesn't belong in my controller:

        ActionResult GetSearchResults(SearchViewModel model)
        {
            var query = service.GetAllParcels();
            if (model.Census != null)
            {
                query = query.Where(x => x.Census == model.Census);
            }
            if (model.OwnerKeywords != null)
            {
                query = query.Where(x => x.Owners == model.OwnerKeywords);
            }
            return View(query.ToList());
        }
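
    The general "turn optional search fields into a query" step, sketched in Python rather than C#/LINQ: the controller only copies view-model values onto a plain criteria object, and the service layer owns the translation (all names here are made up):

        class ParcelSearchCriteria:
            """Plain data holder the service layer can accept without knowing about view models."""

            def __init__(self, census=None, owner_keywords=None, address_keywords=None):
                self.census = census
                self.owner_keywords = owner_keywords
                self.address_keywords = address_keywords

            def to_where(self):
                """Build a WHERE clause and parameter list from whichever fields were filled in."""
                clauses, params = [], []
                if self.census:
                    clauses.append("census = %s")
                    params.append(self.census)
                if self.owner_keywords:
                    clauses.append("owners LIKE %s")
                    params.append(f"%{self.owner_keywords}%")
                if self.address_keywords:
                    clauses.append("address LIKE %s")
                    params.append(f"%{self.address_keywords}%")
                where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
                return where, params

        # The controller only maps form fields onto the criteria object;
        # the query-building logic lives with the data access code.
        criteria = ParcelSearchCriteria(census="4207", owner_keywords="smith")
        where, params = criteria.to_where()
        print("SELECT * FROM parcels" + where, params)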

    Read the article

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using: SELECT * FROM table01 WHERE UUID = 'whatever'; The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row-identifier that is available to the front-end. Since I have to select by UUID, and not ID, I need to know what of these two options I should go for, if say the table consists of 100'000 rows. What speed differences would I look at, and would the index for the UUID grow to large, and lag the DB? Get the ID before doing the "big" select 1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'"; 2. $rows = SELECT * FROM table01 WHERE ID = $id; Or keep it the way it is now, using the UUID. 1. SELECT FROM table01 WHERE UUID '{alphanumeric character}'; Side note: All new rows are created by checking if the system generated uniqueid exists before trying to insert a new row. Keeping the column always unique. The "example" table. CREATE TABLE Table01 ( ID int NOT NULL PRIMARY KEY, UUID char(15), name varchar(100), url varchar(255), `date` datetime ) ENGINE = InnoDB; CREATE UNIQUE INDEX UUID ON Table01 (UUID);

    Read the article

  • How can I set the tab on a webpage depending on which page you come from in ASP.NET MVC?

    - by uriDium
    I have a rather large entity. It has a parent relationship with many different child tables. Each of the child tables I have represented as a tab (luckily it makes sense, looks nice, and makes things easier to navigate). If the user wants to add a new child row, they go to the particular tab, and there they see a list of rows owned by the parent and can do the usual CRUD. The CRUD takes them to a new controller and action and passes the ID of the parent to the action. (This is, as far as I can tell, what I was meant to do; any other ideas??) When they have finished, they click save and it takes them back to the original page, BUT I want it to automatically go to the right tab. How can I do this? I am using jQuery UI tabs and ASP.NET MVC 2.0. One idea I had was to just go back to the bookmark (the href part with the #, e.g. /Parent/Details/4/#tabname); apparently jQuery UI tabs can handle this. Another was to set the tab name as part of the query string (/Parent/Details/4?tab=name). What is the best practice here?

    Read the article
