Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Consolidating separate Loan, Purchase & Sales tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are joined to their respective customer rows by: customer.id [serial] = loan.foreign_id [integer]; = purchase.foreign_id [integer]; = sale.foreign_id [integer]. I would like to consolidate the three tables into one table called "transaction", where a column "transaction.trx_type" [char(1)] {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern; I think it would be easier, programming- and user-wise, to have all types of transactions under one table. (A sketch of what the consolidated table could look like follows this entry.)

    Read the article
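
    A minimal sketch of what the consolidated table could look like, in generic SQL; apart from the trx_type flag and the customer join, every column here is an illustrative assumption rather than the asker's actual schema:

        CREATE TABLE transaction (
            id          SERIAL,              -- surrogate key, as in the existing tables
            customer_id INTEGER NOT NULL,    -- joins to customer.id, replacing the three foreign_id columns
            trx_type    CHAR(1) NOT NULL,    -- 'L' = Loan, 'P' = Purchase, 'S' = Sale
            trx_date    DATE,                -- illustrative only; carry over whatever the three tables share
            amount      DECIMAL(12,2)        -- illustrative only
        );

        -- all of one customer's activity in a single query, filtered by type when needed
        SELECT * FROM transaction WHERE customer_id = 42 AND trx_type = 'L';

    Columns that exist in only one of the three tables would either become nullable here or argue for keeping the tables separate.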

  • How to correctly migrate URLs from a custom ASP.NET solution to WordPress?

    - by Marek
    I have a web site built with ASP.NET, with ugly URLs like /DisplayContent.aspx?id=789564. I know how to migrate the database, but the WordPress URLs will (naturally) be different. Can I simply write some mapping, or do I have to include a rewrite rule for each subpage (300 pages) in .htaccess? Should I provide a rewrite rule for each existing page that transforms the full old URL into the known new URL, for example: /DisplayContent.aspx?id=789798 -> /2010-5-10/Title-Of-The-Post? Even if I manage to migrate the URLs, the HTML structure of the new content will naturally be different. How does this affect SEO? Should I run ASP.NET and WordPress side by side and issue the redirects from the ASP.NET application? What is the most efficient solution to this kind of URL migration without doing PHP programming? (An .htaccess sketch follows this entry.)

    Read the article
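
    One way to express the per-page mapping in .htaccess with Apache mod_rewrite (assuming Apache sits in front of WordPress); the first target is the example URL from the question, the second mapping is hypothetical and only shows the repeating pattern:

        RewriteEngine On

        # one pair of lines per old id; 301 so search engines carry ranking to the new URL
        RewriteCond %{QUERY_STRING} ^id=789798$
        RewriteRule ^DisplayContent\.aspx$ /2010-5-10/Title-Of-The-Post? [R=301,L]

        # hypothetical second mapping
        RewriteCond %{QUERY_STRING} ^id=789564$
        RewriteRule ^DisplayContent\.aspx$ /2010-5-12/Another-Post-Title? [R=301,L]

    For 300 pages this block is easiest to generate from the database; a RewriteMap would be tidier, but it is only allowed in the server configuration, not in .htaccess.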

  • Accessing two sides of a user-user relationship in rails

    - by Lowgain
    Basically, I have a users model in my Rails app, and a fanship model, to let users become 'fans' of each other. In my user model I have: has_many :fanships has_many :fanofs, :through => :fanships In my fanship model I have: belongs_to :user belongs_to :fanof, :class_name => "User", :foreign_key => "fanof_id" My fanship table basically consists of :id, :user_id and :fanof_id. This all works fine, and I can see which users a specific user is a fan of like: <% @user.fanofs.each do |fan| %> #things <% end %> My question is, how can I get a list of the users that are fans of this specific user? Ideally I could just call something like @user.fans, but if that isn't possible, what is the most efficient way of going about this? Thanks! (A sketch of the inverse association follows this entry.)

    Read the article
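
    A sketch of the other direction, keeping the Rails 2/3-era syntax used in the question; inverse_fanships and fans are names introduced here, not part of the asker's code:

        class User < ActiveRecord::Base
          has_many :fanships
          has_many :fanofs, :through => :fanships

          # the rows in fanships where this user is the one being fanned
          has_many :inverse_fanships, :class_name => "Fanship", :foreign_key => "fanof_id"
          has_many :fans, :through => :inverse_fanships, :source => :user
        end

    With that in place, @user.fans returns the users who have @user as their fanof.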

  • What's the advantage of an Adobe AIR app over a traditional desktop app?

    - by John
    I'm pretty familiar with Adobe Flex & AS3, and compared with writing apps in JS/HTML I think it's very cool. However, since AIR is essentially a non-browser version of Flex with benefits like local storage, it seems to be competing as a cross-platform desktop application platform... and in that space it's much less mature than more established desktop technologies. So what's the advantage of creating a desktop application using AIR compared to something like Java (or C++ with a cross-platform GUI library like wxWidgets)? Java is equally capable of communicating with a server, for instance, so I'm not quite sure what AIR adds when competing head-to-head in the desktop development world.

    Read the article

  • enumerate all combinations in c++

    - by BCS
    My question is similar to this combinations question, but in my case I have N (N > 4) small sets (1-2 items per set for now; it might go to 3, maybe 4) and want to generate each combination of one item from each set. The current solution looks something like this: for(T:: iterator a = setA.begin(); a != setA.end(); ++a) for(T:: iterator b = setB.begin(); b != setB.end(); ++b) for(T:: iterator c = setC.begin(); c != setC.end(); ++c) for(T:: iterator d = setD.begin(); d != setD.end(); ++d) for(T:: iterator e = setE.begin(); e != setE.end(); ++e) something(*a,*b,*c,*d,*e); Simple, effective, probably reasonably efficient, but ugly and not very extensible. Does anyone know of a better/cleaner way to do this? (A sketch of one alternative follows this entry.)

    Read the article
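
    A generic "odometer" sketch over a vector of sets that removes the nested loops; it assumes all sets share one element type T and that something() can accept a vector of iterators instead of five separate arguments:

        #include <set>
        #include <vector>

        // visit every combination that picks one element from each set
        template <typename T, typename Func>
        void for_each_combination(const std::vector<std::set<T>>& sets, Func something)
        {
            if (sets.empty()) return;
            std::vector<typename std::set<T>::const_iterator> its;
            for (const auto& s : sets) {
                if (s.empty()) return;              // no combinations if any set is empty
                its.push_back(s.begin());
            }
            for (;;) {
                something(its);                     // current pick: *its[0], *its[1], ...
                // advance like an odometer: bump the last iterator, carrying leftwards on wrap
                std::size_t i = sets.size();
                while (i-- > 0) {
                    if (++its[i] != sets[i].end()) break;
                    if (i == 0) return;             // leftmost wrapped: every combination visited
                    its[i] = sets[i].begin();
                }
            }
        }

    Adding a sixth or seventh set then means adding an element to the vector rather than another loop.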

  • Awk or Sed: File Annotation

    - by lukmac
    Hello, my SO friends, my question is: Specification: annotate the fields of FILE_2 into the corresponding positions of FILE_1. A field is marked, and hence identified, by a delimiter pair. I did this job in Python before I knew awk and sed, with a couple hundred lines of code. Now I want to see how powerful and efficient awk and sed can be. Show me some masterpiece of awk or sed, please! The delimiter pairs can be configured in FILE_3, but let's assume the first delimiter in a pair is 'Marker (number i) start' and the other one is 'Marker (number i) done'. Example: |-----------------FILE_1------------------| text text text text blabla Marker_1_start Marker_1_done any text in between blabla Marker_2_start Marker_2_done text text |-----------------FILE_2------------------| Marker_1_start 11 1111 Marker_1_done Marker_2_start 2222 22 Marker_2_done Expected Output: |-----------------FILE_Out------------------| text text text text blabla Marker_1_start 11 1111 Marker_1_done any text in between blabla Marker_2_start 2222 22 Marker_2_done text text (A small awk sketch follows this entry.)

    Read the article
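
    A small awk sketch under the assumption that every marker sits on a line of its own in both files and that FILE_2's field should be emitted right after the matching start marker in FILE_1; the file layout in the excerpt is ambiguous, so treat this as a starting point rather than the requested masterpiece:

        # annotate.awk -- usage: awk -f annotate.awk FILE_2 FILE_1 > FILE_Out
        NR == FNR {                              # first file on the command line: FILE_2
            if ($0 ~ /_start$/)      { key = $0; buf = "" }
            else if ($0 ~ /_done$/)  { block[key] = buf; key = "" }
            else if (key != "")      { buf = buf $0 "\n" }
            next
        }
        {                                        # second file: FILE_1
            print
            if ($0 in block) printf "%s", block[$0]   # known start marker: emit its stored field
        }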

  • group by country with ActiveRecords in Rails

    - by Adnan
    Hello, I have a table of users: name | country | .. | UK | .. | US | .. | US | .. | UK | .. | FR | .. | FR | .. | UK | .. | UK | .. | DE | .. | DE | .. | UK | .. | CA | . . What is the most efficient way, with ActiveRecord, to get the list of countries into my view along with how many users come from each, e.g.: US 123 UK 54 DE 33 . . . (A sketch using a grouped count follows this entry.)

    Read the article
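
    A sketch using ActiveRecord's grouped count, which does the whole thing in one query; the first line is the Rails 2.x spelling (matching the question's era), the comments show the Rails 3+ form and an illustrative view loop:

        # returns a hash like { "UK" => 54, "US" => 123, "DE" => 33, ... }
        @user_counts = User.count(:group => :country)
        # Rails 3+ equivalent: User.group(:country).count

        # in the view (ERB shown as comments to keep this a single Ruby block):
        # <% @user_counts.sort_by { |_, n| -n }.each do |country, n| %>
        #   <%= country %> <%= n %>
        # <% end %>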

  • Background audio not working in windows 8 store / metro app

    - by roryok
    I've tried setting background audio both through a MediaElement in XAML <MediaElement x:Name="MyAudio" Source="Assets/Sound.mp3" AudioCategory="BackgroundCapableMedia" AutoPlay="False" /> and programmatically: async void setUpAudio() { var package = Windows.ApplicationModel.Package.Current; var installedLocation = package.InstalledLocation; var storageFile = await installedLocation.GetFileAsync("Assets\\Sound.mp3"); if (storageFile != null) { var stream = await storageFile.OpenAsync(Windows.Storage.FileAccessMode.Read); _soundEffect = new MediaElement(); _soundEffect.AudioCategory = AudioCategory.BackgroundCapableMedia; _soundEffect.AutoPlay = false; _soundEffect.SetSource(stream, storageFile.ContentType); } } // and later... _soundEffect.Play(); But neither works for me. As soon as I minimise the app, the music fades out. (A sketch of the MediaControl wiring follows this entry.)

    Read the article
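
    Setting AudioCategory is only half of it on Windows 8: as far as I recall, the app also needs an Audio declaration under Declarations > Background Tasks in Package.appxmanifest, and it must register the Windows.Media.MediaControl handlers, otherwise playback is muted on minimise. A hedged sketch for the page's code-behind, using the XAML-declared MyAudio element (the helper names are illustrative, not a confirmed fix):

        using System;
        using Windows.Media;
        using Windows.UI.Core;

        // call once, e.g. from the page constructor after InitializeComponent()
        void RegisterForBackgroundAudio()
        {
            MediaControl.PlayPressed  += (sender, e) => OnUiThread(() => MyAudio.Play());
            MediaControl.PausePressed += (sender, e) => OnUiThread(() => MyAudio.Pause());
            MediaControl.StopPressed  += (sender, e) => OnUiThread(() => MyAudio.Stop());
        }

        // MediaControl events arrive off the UI thread, so marshal back before touching the MediaElement
        void OnUiThread(Action action)
        {
            var ignored = Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => action());
        }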

  • Database Optimization techniques for amateurs.

    - by Zombies
    Can we get a list of basic optimization techniques going (anything from modeling to querying, creating indexes, views to query optimization)? It would be nice to have a list of these, one technique per answer. As a hobbyist I would find this very useful, thanks. And for the sake of not being too vague, let's say we are using a mainstream DB such as MySQL or Oracle, and that the DB will contain 500,000-1m or so records across ~10 tables, some with foreign key constraints, all using the most typical storage engines (e.g. InnoDB for MySQL). And of course, the basics such as PKs are defined, as well as FK constraints. (One indexing example follows this entry.)

    Read the article
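
    One concrete starter from that list, since the described schema leans on foreign keys: index every column used in joins or WHERE clauses, and verify with EXPLAIN that the index is actually used. Table and column names below are made up for illustration:

        -- MySQL/InnoDB example with invented names
        CREATE INDEX idx_orders_customer_id ON orders (customer_id);

        -- check the "key" column of the output to confirm the index is chosen
        EXPLAIN SELECT o.*
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        WHERE c.country = 'DE';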

  • Correct way to take absolute value of INT_MIN

    - by aka.nice
    I want to perform some arithmetic in unsigned and need to take the absolute value of a negative int, something like: do_some_arithmetic_in_unsigned_mode(int some_signed_value) { unsigned int magnitude; int negative; if(some_signed_value<0) { magnitude = 0 - some_signed_value; negative = 1; } else { magnitude = some_signed_value; negative = 0; } ...snip... } But INT_MIN might be problematic: 0 - INT_MIN is UB if performed in signed arithmetic. What is a standard/robust/safe/efficient way to do this in C? (A sketch follows this entry.)

    Read the article
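
    A sketch of the usual portable idiom: convert to unsigned first (a well-defined modular conversion) and negate in unsigned arithmetic, where wraparound is defined; this yields the correct magnitude even for INT_MIN, since that magnitude always fits in unsigned int:

        #include <limits.h>

        /* negate in unsigned arithmetic instead of negating the signed value */
        static unsigned int magnitude_of(int v)
        {
            if (v < 0)
                return -(unsigned int)v;   /* conversion is v mod (UINT_MAX+1); unsigned
                                              negation then gives the true magnitude */
            return (unsigned int)v;
        }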

  • Paperclip generating wrong URLs in Heroku

    - by Tony
    Paperclip is generating wrong URLs on Heroku. I have an Audio model with an mp3 attachment, as follows: class Audio < ActiveRecord::Base has_attached_file :mp3, :storage => :s3, :s3_credentials => S3_CREDENTIALS, :bucket => S3_CREDENTIALS[:bucket], :path => ":rails_root/public/system/:attachment/:id/:style/:filename", :url => "/system/:attachment/:id/:style/:filename" I am calling audio.mp3.url from a controller, and it returns http://s3.amazonaws.com/MyApp/audios/mp3s//original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050 instead of http://s3.amazonaws.com/MyApp/audios/mp3s/000/000/004/original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050 (which works). Why is it missing the '000/000/004' part of the route? The same model generates the right URL when used in a view. I am using Paperclip 3.2.0 and Rails 3.1.8. Any help? (A hedged suggestion follows this entry.)

    Read the article
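
    The missing 000/000/004 segment looks like Paperclip's :id_partition interpolation, while the question's :path uses plain :id plus a :rails_root/public prefix that belongs to filesystem storage rather than S3. A variant worth trying, not a confirmed diagnosis of why the controller and the view differ:

        has_attached_file :mp3,
          :storage        => :s3,
          :s3_credentials => S3_CREDENTIALS,
          :bucket         => S3_CREDENTIALS[:bucket],
          # :class/:attachment/:id_partition renders as audios/mp3s/000/000/004,
          # matching the working URL; no :rails_root/public prefix on S3
          :path           => ":class/:attachment/:id_partition/:style/:filename",
          :url            => ":s3_domain_url"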

  • Updating a database periodically with Java

    - by MSR
    I would like to perform updates to a MySQL database using two separate classes (that do different things): one doing so every 10 seconds, and the second every minute. I have a few gaps in my Java knowledge and I'm wondering what the best way to achieve this is. Importantly, if connectivity to the database is lost, reconnection attempts need to occur indefinitely, and I'm guessing the use of prepared statements will make the queries more efficient. Should the connection to the database be left open all the time or closed between the updates being run? Maybe I also need to think about clearing objects/resources out of memory if the class instances are going to run indefinitely. (A scheduling sketch follows this entry.)

    Read the article
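
    A sketch of the scheduling and retry-forever part with ScheduledExecutorService; the two update methods stand in for the asker's two classes, and the JDBC URL, credentials, and query are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class PeriodicUpdater {

            private static final String URL = "jdbc:mysql://localhost/mydb";   // placeholder

            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
                // the two lambdas stand in for the asker's two classes
                scheduler.scheduleWithFixedDelay(() -> runSafely(PeriodicUpdater::tenSecondUpdate), 0, 10, TimeUnit.SECONDS);
                scheduler.scheduleWithFixedDelay(() -> runSafely(PeriodicUpdater::minuteUpdate), 0, 1, TimeUnit.MINUTES);
            }

            // an exception thrown out of a scheduled task silently cancels it, so catch everything;
            // the next scheduled run is then another (indefinite) reconnection attempt
            private static void runSafely(Runnable task) {
                try {
                    task.run();
                } catch (RuntimeException e) {
                    System.err.println("update failed, will retry on next run: " + e.getMessage());
                }
            }

            private static void tenSecondUpdate() {
                // open per run, use a PreparedStatement, close via try-with-resources
                try (Connection c = DriverManager.getConnection(URL, "user", "password");            // placeholders
                     PreparedStatement ps = c.prepareStatement("UPDATE stats SET hits = hits + 1 WHERE id = ?")) {  // illustrative query
                    ps.setInt(1, 1);
                    ps.executeUpdate();
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            }

            private static void minuteUpdate() {
                /* same pattern with the minute-level query */
            }
        }

    Opening the connection per run (or using a small connection pool) keeps the reconnect logic trivial: a failed run simply logs and waits for the next tick.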

  • How to implement a set?

    - by nomemory
    I want to implement a set in C. Is it OK to use a linked list when creating the set, or should I use another approach? How do you usually implement your own set (if needed)? NOTE: If I use the linked-list approach, I will probably have the following complexities for my operations: init: O(1); destroy: O(n); insert: O(n); remove: O(n); union: O(n*m); intersection: O(n*m); difference: O(n*m); ismember: O(n); issubset: O(n*m); setisequal: O(n*m). O(n*m) may be a little too big, especially for huge data... Is there a way to implement my set more efficiently? (A hash-based sketch follows this entry.)

    Read the article
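
    A minimal sketch of a chaining hash set for ints, which brings insert/ismember down to O(1) on average (and the pairwise operations to roughly O(n + m)); the names and the fixed bucket count are illustrative only:

        #include <stdlib.h>
        #include <stdbool.h>

        #define NBUCKETS 1024

        typedef struct node { int value; struct node *next; } node;
        typedef struct { node *buckets[NBUCKETS]; } intset;

        static unsigned bucket_of(int v) { return (unsigned)v % NBUCKETS; }

        intset *set_create(void) { return calloc(1, sizeof(intset)); }

        bool set_contains(const intset *s, int v) {
            for (const node *n = s->buckets[bucket_of(v)]; n; n = n->next)
                if (n->value == v) return true;
            return false;
        }

        bool set_insert(intset *s, int v) {          /* returns false if already present */
            if (set_contains(s, v)) return false;
            node *n = malloc(sizeof *n);
            n->value = v;
            n->next = s->buckets[bucket_of(v)];
            s->buckets[bucket_of(v)] = n;
            return true;
        }

        void set_destroy(intset *s) {
            for (int i = 0; i < NBUCKETS; i++)
                for (node *n = s->buckets[i]; n; ) { node *next = n->next; free(n); n = next; }
            free(s);
        }

    For ordered iteration or very large, unknown key ranges, a balanced tree (O(log n) per operation) is the usual alternative.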

  • Best approach to synchronising properties across threads

    - by user290796
    Hi, I'm looking for some advice on the best approach to synchronising access to properties of an object in C++. The application has an internal cache of objects which have 10 properties. These objects are requested in sets, which can then have their properties modified and be re-saved. They can be accessed by 2-4 threads at any given time, but access is not intense, so my options are: 1) lock the property accessors for each object using a critical section, which means lots of critical sections, one for each object; or 2) return copies of the objects when requested and have an update function which locks a single critical section to update the object properties when appropriate. I think option 2 seems the most efficient, but I just want to see if I'm missing a hidden third option which would be more appropriate. Thanks, J (A sketch of option 2 follows this entry.)

    Read the article
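
    A sketch of option 2 with a single lock around copy-out and write-back; std::mutex is C++11, so swap in the project's existing critical-section wrapper if needed, and the Properties struct is just a stand-in for the ten properties:

        #include <mutex>
        #include <unordered_map>

        struct Properties { int a = 0; double b = 0.0; /* ...the ten properties... */ };

        class ObjectCache {
        public:
            // readers get an independent copy they can inspect and modify without holding the lock
            Properties snapshot(int id) {
                std::lock_guard<std::mutex> lock(mutex_);
                return objects_[id];
            }

            // writers push the whole modified copy back in one short critical section
            void update(int id, const Properties& p) {
                std::lock_guard<std::mutex> lock(mutex_);
                objects_[id] = p;
            }

        private:
            std::mutex mutex_;                          // one lock for the cache, not one per object
            std::unordered_map<int, Properties> objects_;
        };

    The trade-off to keep in mind: if two threads copy the same object and both write back, the last update wins, so copy-and-swap only works cleanly when concurrent edits to the same object are rare or acceptable.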

  • Retrieving a single Guid in CRM 4.0

    - by user1746560
    I'm new to CRM (version 4.0) and I'm trying to return a 'yearid' Guid based on a given year (which is also stored in the entity). So far I've got: public static Guid GetYearID(string yearName) { ICrmService service = CrmServiceFactory.GetCrmService(); // Create the query object. QueryExpression query = new QueryExpression("year"); ColumnSet columns = new ColumnSet(); columns.AddColumn("yearid"); query.ColumnSet = columns; FilterExpression filter = new FilterExpression(); filter.FilterOperator = LogicalOperator.And; filter.AddCondition(new ConditionExpression { AttributeName = "yearName", Operator = ConditionOperator.Equal, Values = new object[] { yearName} }); query.Criteria = filter; } But my questions are: A) What code do I add to this to actually retrieve and store the Guid? B) Is using a QueryExpression the most efficient way to do this?

    Read the article

  • delete all records except the id I have in a python list

    - by jay_t
    Hi all, I want to delete all records in a MySQL DB except for the record ids I have in a list. The length of that list can vary and could easily contain 2000+ ids. Currently I convert my list to a string so it fits in something like this: cursor.execute("""delete from table where id not in (%s)""",(list)) Which doesn't feel right, and I have no idea how long the list is allowed to be... What's the most efficient way of doing this from Python? Altering the structure of the table with an extra field to mark/unmark records for deletion would be great but is not an option. Having a dedicated table storing the ids would indeed be helpful, since this could then just be done through a SQL query... but I would really like to avoid these options if possible. Thanks. (A placeholder-based sketch follows this entry.)

    Read the article
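
    A sketch that builds one %s placeholder per id so the driver does the quoting (MySQLdb-style parameters); the table and column names follow the question's pseudo-query:

        def delete_all_except(cursor, keep_ids):
            """Delete every row whose id is not in keep_ids."""
            if not keep_ids:
                cursor.execute("DELETE FROM table")           # nothing to keep at all
                return
            placeholders = ", ".join(["%s"] * len(keep_ids))  # one %s per id
            sql = "DELETE FROM table WHERE id NOT IN (%s)" % placeholders
            cursor.execute(sql, list(keep_ids))               # the driver escapes each value

    A few thousand ids is well within what MySQL accepts for an IN list; the practical ceiling is max_allowed_packet rather than the clause itself.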

  • simple search in rails

    - by Adnan
    Hi, I'm making a simple search form in Rails. In my search view I have two select boxes with fixed values like: SELECT BOX 1 SELECT BOX 2 ALL, ALL, FR, FR, US, US, DE DE And I have 2 fields in my DB, country_from and country_to. So for a simple search like from FR to US I use: @search_result = Load.find(:all, :conditions => "country_from='#{params[:country_from]}' AND country_to='#{params[:country_to]}'" ) That is fine, but I need to implement the ALL option as well, so when I make a search like from DE to ALL I get a list with all countries in country_to. I imagine I can do it with ifs... but what would be the most efficient way to do it? (A sketch follows this entry.)

    Read the article
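
    A sketch that both handles ALL and avoids interpolating params straight into the SQL string; it keeps the question's Rails 2-style find:

        conditions = {}
        conditions[:country_from] = params[:country_from] unless params[:country_from] == "ALL"
        conditions[:country_to]   = params[:country_to]   unless params[:country_to]   == "ALL"

        # an empty conditions hash means no filtering, so ALL -> ALL returns everything
        @search_result = Load.find(:all, :conditions => conditions)
        # Rails 3+: Load.where(conditions)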

  • Is there a limit on merge tables with MySQL?

    - by sysko
    I'm working on a database with MySQL 5.0 for an open source project; it's used to store sentences in specific languages and their translations in other languages. I used to have a big "sentences" table and a "sentences_translations" table (used to join sentences to sentences), but as we now have nearly one million entries this began to get a bit slow; moreover, most requests use a "where lang =", so I decided to create a table per language, sentences_LANGUAGECODE and sentences_translation_LANGSOURCE_LANGTARGET, and to create MERGE tables such as sentences_ENG_OTHERS (which merges sentences_ENG_ARA, sentences_ENG_DEU, etc.) for when we want the translations of an English sentence in all languages, and sentences_OTHERS_ENG for when we want only the English translations of some sentences. I wrote a script to create all these tables (there are around 31 languages, so more than 60 merge tables) and tested it; it works really well: a query which used to take 160 ms now takes only 30 :) But I discovered that all my merge tables after the 15th show "NULL" as their storage engine instead of MRG_MYISAM, and if I delete one, I can then create another; using FLUSH TABLES between creations also lets me create more merge tables. So is this a limitation of MySQL? Can it be overridden? Thanks for your answers. (Some configuration knobs to check follow this entry.)

    Read the article
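
    I can't say for certain this is the cause, but a MERGE table opens every underlying MyISAM table, so file-descriptor and table-cache limits are easy to exhaust with 31 languages; these are the MySQL 5.0 settings I'd check first, with purely illustrative values:

        # my.cnf -- raise gradually and watch the Opened_tables status counter
        [mysqld]
        open_files_limit = 8192
        table_cache      = 2048    # renamed table_open_cache in MySQL 5.1+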

  • Would dropping X altogether hurt?

    - by Xavier Maillard
    Hi, I live in the Linux terminal all the time on my Slackware GNU/Linux system (an EeePC). By default, GNU Emacs won't start if it can't find several Xorg libraries. Assuming I will never use X software at all, would it make sense for me to drop all this Xorg stuff and compile Emacs again? Are you aware of anything that could get me into trouble or make GNU Emacs not work at all? Is there any advantage for me in keeping all these dependencies? I'm asking because, as said, my main box is an EeePC with little storage and I am dangerously hitting the limits ;-) Regards (A rebuild sketch follows this entry.)

    Read the article
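
    If you do decide to drop X, Emacs builds cleanly as a terminal-only binary; a minimal sketch of the rebuild (any options beyond --without-x are left at their defaults, adjust paths to taste):

        ./configure --without-x
        make
        make install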

  • Recommend a Perl module to persist a large object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes 40+ seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storing many small objects, not a single large one. Can anyone recommend a module designed for this? (A Storable sketch follows this entry.)

    Read the article
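
    Storable is the usual answer for a single large structure: store() and retrieve() write and read a binary image of the parsed tree. A sketch, with placeholder file names:

        use strict;
        use warnings;
        use Storable qw(store retrieve);
        use XML::Simple qw(XMLin);

        my $xml_file   = 'big.xml';             # placeholder
        my $cache_file = "$xml_file.storable";  # placeholder

        my $data;
        if (-e $cache_file && -M $cache_file < -M $xml_file) {   # cache is newer than the XML
            $data = retrieve($cache_file);
        } else {
            $data = XMLin($xml_file);           # the slow 40-second parse
            store($data, $cache_file);
        }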

  • Twin edges - Half edge data structure

    - by Pradeep Kumar
    I have implemented a half-edge data structure for loading 3D objects. I find that the part that assigns twin/pair edges takes the longest computation time (especially for objects which have hundreds of thousands of half edges). The reason is that I use nested loops to accomplish this. Is there a simpler and more efficient way of doing this? Below is the code I've written. HE is the half-edge data structure, hearr is a vector containing all the half edges, vert is the starting vertex and end is the ending vertex. Thanks!! HE *e1,*e2; for(size_t i=0;i<hearr.size();i++){ e1=hearr[i]; for(size_t j=1;j<hearr.size();j++){ e2=hearr[j]; if((e1->vert==e2->end)&&(e2->vert==e1->end)){ e1->twin=e2; e2->twin=e1; } } } (A map-based sketch follows this entry.)

    Read the article
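
    A single pass with a map keyed on the (vert, end) pair replaces the nested loops, roughly O(n log n) instead of O(n^2); this assumes vert and end are pointers, as the equality comparisons in the question suggest:

        #include <map>
        #include <utility>

        // key = (origin vertex, destination vertex); the twin is the edge with the reversed key
        typedef std::pair<const void*, const void*> EdgeKey;
        std::map<EdgeKey, HE*> open_edges;

        for (size_t i = 0; i < hearr.size(); ++i) {
            HE* e = hearr[i];
            EdgeKey twin_key(e->end, e->vert);              // the opposite direction
            std::map<EdgeKey, HE*>::iterator it = open_edges.find(twin_key);
            if (it != open_edges.end()) {
                e->twin = it->second;                       // found the opposite half edge
                it->second->twin = e;
                open_edges.erase(it);
            } else {
                open_edges[EdgeKey(e->vert, e->end)] = e;   // remember it until the twin shows up
            }
        }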

  • How to check if the sum of some records equals the difference between two other records in t-sql?

    - by Dan Appleyard
    I have a view that contains bank account activity. ACCOUNT BALANCE_ROW AMOUNT SORT_ORDER 111 1 0.00 1 111 0 10.00 2 111 0 -2.50 3 111 1 7.50 4 222 1 100.00 5 222 0 25.00 6 222 1 125.00 7 ACCOUNT = account number; BALANCE_ROW = 1 for a starting or ending balance row, otherwise 0; AMOUNT = the amount; SORT_ORDER = simple ordering that returns the records as start balance, activity, end balance. I need to figure out a way to check whether the sum of the non-balance rows equals the difference between the ending balance and the starting balance. The result for each account (1 for yes, 0 for no) would simply be added to the resulting result set. Example: account 111 had a starting balance of 0.00; there were two account activity records of 10.00 and -2.50, which resulted in the ending balance of 7.50. I've been playing around with temp tables, but I was not sure if there is a more efficient way of accomplishing this. Thanks for any input you may have! (A conditional-aggregation sketch follows this entry.)

    Read the article
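
    A sketch using conditional aggregation over the view (called bank_activity here, an assumed name); the start and end balances are taken as the balance rows with the smallest and largest SORT_ORDER per account:

        WITH flagged AS (
            SELECT ACCOUNT, BALANCE_ROW, AMOUNT, SORT_ORDER,
                   MIN(CASE WHEN BALANCE_ROW = 1 THEN SORT_ORDER END)
                       OVER (PARTITION BY ACCOUNT) AS start_row,
                   MAX(CASE WHEN BALANCE_ROW = 1 THEN SORT_ORDER END)
                       OVER (PARTITION BY ACCOUNT) AS end_row
            FROM bank_activity
        )
        SELECT ACCOUNT,
               CASE WHEN SUM(CASE WHEN BALANCE_ROW = 0 THEN AMOUNT ELSE 0 END)
                       = SUM(CASE WHEN SORT_ORDER = end_row   THEN AMOUNT ELSE 0 END)
                       - SUM(CASE WHEN SORT_ORDER = start_row THEN AMOUNT ELSE 0 END)
                    THEN 1 ELSE 0 END AS activity_matches_balances
        FROM flagged
        GROUP BY ACCOUNT;

    For the sample data this returns 1 for both accounts (7.50 - 0.00 = 10.00 - 2.50, and 125.00 - 100.00 = 25.00), and no temp tables are needed.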

  • Why do I have to set the max length of every damn text column in the database?

    - by John Leidegren
    Why is it that every RDBMS insists that you tell it what the max length of a text field is going to be... why can't it just infer this information from the data that's put into the database? I've mostly worked with MS SQL Server, but every other database I know also demands that you set these arbitrary limits on your data schema. The reality is that this is not particularly helpful or friendly to work with, because the business requirements change all the time and almost every day some end user is trying to put a lot of text into that column. Does anyone with some inner working knowledge of an RDBMS know why we just don't infer the limits from the data that's put into the storage? I'm not talking about guessing the type information, but guessing the limits of a particular text column. I mean, there's a reason why I don't use nvarchar(max) on every text column in the database.

    Read the article

  • Should I use multiple threads in this situation? [Ruby]

    - by mr popo
    I'm opening multiple files and processing them, one line at a time. The files contain tokens separating the data, such that the processing of one file may sometimes have to wait for others to catch up to that same token. I was doing this initially with only one thread and an array indicating with true/false whether each file should be read in the current iteration or should wait for some of the others to catch up. Would using threads make this simpler? More efficient? Does Ruby have a mechanism for this? (A barrier sketch follows this entry.)

    Read the article
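
    One thread per file plus a reusable barrier is the usual shape for this: each thread pauses at a synchronisation token until every other file has reached it. A sketch; the file names and the token test are assumptions, and the per-line work is left as a comment:

        require 'thread'

        # a reusable barrier: each thread calls wait and blocks until all count threads have
        class Barrier
          def initialize(count)
            @count, @waiting, @generation = count, 0, 0
            @mutex, @cond = Mutex.new, ConditionVariable.new
          end

          def wait
            @mutex.synchronize do
              gen = @generation
              @waiting += 1
              if @waiting == @count
                @waiting = 0
                @generation += 1
                @cond.broadcast
              else
                @cond.wait(@mutex) while gen == @generation
              end
            end
          end
        end

        files = %w[a.txt b.txt c.txt]              # hypothetical input files
        barrier = Barrier.new(files.size)

        threads = files.map do |path|
          Thread.new do
            File.foreach(path) do |line|
              if line.start_with?("TOKEN")         # assumed marker format
                barrier.wait                       # pause until every file has reached this token
              else
                # ...process the line here...
              end
            end
          end
        end
        threads.each(&:join)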

  • Hyperlinks in VS2008 Test Result Details

    - by Red XIII
    When the resulting string in the "Test Result Details" (TRD) pane is very long, Visual Studio 2008 crashes. I worked around this by sending the result data to a file. There is a problem, however, because there isn't a simple way to open such a file. Of course, I can manually open the folder and then the file, but it isn't very efficient. Now, to the questions. Is there a way to include a hyperlink to a file in the "Error Message" part of TRD (something similar to what we can already find in the stack trace part)? If not, is there any way to add such functionality (easy opening of a file) to TRD? If not, are there any ways to extend the default reporting of VS? Thanks for any help.

    Read the article
