Search Results

Search found 5511 results on 221 pages for 'maintenance operations'.

Page 148/221

  • Database schema for Product Properties

    - by Chemosh
    As so many people, I'm looking for a Products / Product Properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches.

    Requirements:
    - Adding new product types and their options should not require a change to the database schema.
    - Support faceted searches using Sphinx.

    Solutions I've come across (see Bill Karwin's answer):

    Option 1: Single Table Inheritance. Not really an option: the table would contain way too many columns.

    Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizable product catalog this could mean hundreds of tables.

    Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.

    Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).

    What option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?

    Read the article

  • Primary reasons why programming language runtimes use stacks?

    - by manuel aldana
    Many programming language runtime environments use stacks as their primary storage structure (e.g. see JVM bytecode to runtime example). Quickly recalling, I see the following advantages:
    - Simple structure (pop/push), trivial to implement.
    - Most processors are optimized for stack operations anyway, so it is very fast.
    - Fewer problems with memory fragmentation: allocation is just moving the memory pointer up and down, and complete blocks of memory are freed by resetting the pointer to the last entry offset.
    Is the list complete or did I miss something? Are there programming language runtime environments which are not using stacks for storage at all?
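
    The "trivial to implement" point is easy to see with a toy example. Below is a minimal sketch (my own illustration, not from the question) of a stack machine evaluating postfix "bytecode" in Python, using nothing but push and pop:

        # Hypothetical illustration: a tiny stack machine for postfix programs.
        def run(program):
            stack = []
            for op in program:
                if isinstance(op, int):
                    stack.append(op)           # push a literal
                elif op == "+":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)        # pop two operands, push the result
                elif op == "*":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
            return stack.pop()                 # the result is left on top of the stack

        print(run([2, 3, "+", 4, "*"]))        # (2 + 3) * 4 -> 20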

    Read the article

  • Help me understand Dispose() implementation code from MSDN

    - by Benny
    The following code is from MSDN (IDisposable pattern):

        protected virtual void Dispose(bool disposing)
        {
            // If you need thread safety, use a lock around these
            // operations, as well as in your methods that use the resource.
            if (!_disposed)
            {
                if (disposing)
                {
                    if (_resource != null)
                        _resource.Dispose();
                    Console.WriteLine("Object disposed.");
                }

                // Indicate that the instance has been disposed.
                _resource = null;
                _disposed = true;
            }
        }

    Why are the following statements:

        _resource = null;
        _disposed = true;

    not enclosed by the if (disposing) statement block? I would probably write it like this:

        if (disposing)
        {
            if (_resource != null)
            {
                _resource.Dispose();
                _resource = null;
                _disposed = true;
            }
            Console.WriteLine("Object disposed.");
        }

    Is anything wrong with my version?

    Read the article

  • Tridion Core Service - Transaction rollback isn't working

    - by Tamir Lahav
    We are using the core service in an ASP.NET custom page in order to create pages and components and perform several updates (check-out, save and check-in). We want those operations to work in a transaction, and we tried to implement it according to some examples on the net. However, we didn't succeed in getting the rollback to work. It seems that the operations are performed immediately without waiting for the commit. The code we used for a simple check-out / rollback operation, for example, is:

        TransactionOptions txOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };

        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, txOptions))
        {
            using (CoreService2010Client m_client = new CoreService2010Client())
            {
                PageData pp = m_client.CheckOut("tcm:309-36311-64", false, new ReadOptions()) as PageData;
            }
            scope.Dispose();
        }

    We also added the recommended configuration to the web.config bindings section. What are we missing?

    Read the article

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by several event machines that are constantly monitoring various input streams and creating records when they see data that meets certain criteria.

    It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the model's after_create hook, since not all record creations will flow through the controller's create action.

    The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials, and 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and creates all the HTML around it for presentation) - this would quickly become a maintenance nightmare. The problem with this is that it would require "rendering" partials from within the model (which I'm having trouble doing anyhow, because models don't seem to have access to helpers when they're rendered in this manner).

    Anyhow - just wondering what the right way to go about organizing all of this is. Thanks

    Read the article

  • Switch to BigInteger if necessary

    - by fahdshariff
    I am reading a text file which contains numbers in the range [1, 10^100]. I am then performing a sequence of arithmetic operations on each number. I would like to use a BigInteger only if the number is out of the int/long range. One approach would be to count how many digits there are in the string and switch to BigInteger if there are too many. Otherwise I'd just use primitive arithmetic as it is faster. Is there a better way? Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small? This way we would not have to worry about overflows.

    Read the article

  • SQL Server 2000 Stored Procedure Prevent Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So, I've tried running the sp with "WITH RECOMPILE" after the EXEC and I've also tried putting the "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may take for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable.

    EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
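
    One direction worth measuring (a sketch under my own assumptions, not code from the question): drop the watcher thread, do the additions in fixed-size batches, and only consult the clock between batches, so the inner loop contains nothing beyond the increment itself.

        import time

        def count_additions(seconds, batch=100000):
            """Count how many 'total += 1' operations fit into roughly `seconds` seconds."""
            total = 0
            deadline = time.time() + seconds
            while time.time() < deadline:          # clock checked once per batch
                for _ in range(batch):             # inner loop: nothing but the addition
                    total += 1
            return total                           # may overshoot the deadline by at most one batch

        if __name__ == '__main__':
            print('Relative speed:', count_additions(60))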

    Read the article

  • What's currently the best way to extend Excel using C#?

    - by user169867
    I have the professional versions of VS2008 & VS2010. I wish to add a couple of buttons to a toolbar in Excel. When they are clicked I'd like to be able to open a form (either WinForms or WPF is fine), collect a few values from the user in the form, and then take that data + read cell values from the current worksheet to perform some database operations. What's currently the best way to do this using C#? I'd greatly appreciate a pointer to any examples / tutorials. My understanding is that VS2010 has improved the process a lot, but I may have to deal with Excel 2003, which I don't think it supports. I get confused between Visual Studio 2008's Extensibility Shared Add-in template and other Office add-in templates I've seen. I'm not sure when each type of solution is appropriate. I'm new to Office development, so I'd really appreciate any help to get me going on the right track. Thanks much.

    Read the article

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that a database B-tree is used for append-only operations and that a database is stored in a single B-tree (besides per-view B-trees). So I guess the data items that are appended to the database file are revisions of documents, not the whole documents:

                     (inner B-tree nodes)
                    /        |           \
        +------+  +------+  +------+       +------+
        | doc1 |  | doc2 |  | doc1 |  ...  | doc1 |
        | rev1 |  | rev1 |  | rev2 |       | rev7 |
        +------+  +------+  +------+       +------+

    Is it true? If it is true, then how is the current revision of a document determined based on such a B-tree? Doesn't it mean that CouchDB needs a separate "view" database for indexing the current revisions of documents to preserve O(log n) access? Wouldn't that lead to race conditions while building such an index? (As far as I know, CouchDB uses no write locks.)
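
    To make the question concrete, here is a toy sketch (my own illustration, deliberately NOT CouchDB's actual on-disk format) of an append-only store in which every write appends a full revision record and a small index maps each document id to the offset of its newest revision, so reading the current revision is one lookup plus one seek:

        import json

        class AppendOnlyStore:
            def __init__(self, path):
                self.path = path
                self.index = {}                      # doc id -> (rev, byte offset of newest revision)

            def put(self, doc_id, rev, body):
                record = (json.dumps({'id': doc_id, 'rev': rev, 'body': body}) + '\n').encode('utf-8')
                with open(self.path, 'ab') as f:     # append-only: old revisions are never overwritten
                    offset = f.tell()
                    f.write(record)
                self.index[doc_id] = (rev, offset)   # the newest revision wins

            def get_current(self, doc_id):
                rev, offset = self.index[doc_id]
                with open(self.path, 'rb') as f:
                    f.seek(offset)
                    return json.loads(f.readline().decode('utf-8'))

        store = AppendOnlyStore('db.log')
        store.put('doc1', 1, {'x': 1})
        store.put('doc1', 2, {'x': 2})
        print(store.get_current('doc1'))             # -> the rev 2 record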

    Read the article

  • CUDA program results are always zero in HW, correct in EMU?

    - by Orion Nebula
    Hi all! I am having a weird problem. I have written a CUDA code which executes correctly in emulation and all results show up; however, when executed on hardware ("G210") the results in the result memory are always 0.

    I am passing two vectors to the kernel, one with random variables, the other initialized to zero. The code copies the first vector to shared memory, does some swapping and other operations, and then writes back the results to the second vector (the one with the initial 0's).

    I am using double precision, the -arch sm_13 flag is used, and all memory allocations also use sizeof(double). I have checked that the kernel is invoked, and it is, so no problems here. cudaMemcpy has no problems either. What could be the problem? Why would it work in emulation but not on HW? I am quite confused... any ideas?

    Read the article

  • How to use an object whose copy constructor and copy assignment are private?

    - by coanor
    While reading TCPL I ran into a problem, as the title says. The class with the 'private' members is:

        class Unique_handle {
        private:
            Unique_handle& operator=(const Unique_handle &rhs);
            Unique_handle(const Unique_handle &rhs);
        public:
            //...
        };

    The using code is:

        struct Y {
            //...
            Unique_handle obj;
        };

    and I want to execute operations such as:

        int main() {
            Y y1;
            Y y2 = y1;
        }

    Although this code comes from TCPL, I still cannot work out the solution... Can anybody help me? Appreciated.

    Read the article

  • Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?

    - by Tal Weiss
    When profiling our code I was surprised to find millions of calls to C:\Python26\lib\encodings\utf_8.py:15(decode). I started debugging and found that across our code base there are many small bugs, usually comparing a string to a unicode or adding a string and a unicode. Python graciously decodes the strings and performs the following operations in unicode. How kind. But expensive!

    I am fluent in unicode, having read Joel Spolsky and Dive Into Python... I try to keep our code internals in unicode only. My question: can I turn off this Pythonic nice-guy behavior? At least until I find all these bugs and fix them (usually by adding a u'u')? Some of them are extremely hard to find (a variable that is sometimes a string...). Python 2.6.5 (and I can't switch to 3.x).
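
    One commonly suggested trick for exactly this situation - offered here as a debug-only sketch, since it relies on the reload(sys) hack and the Python 2 'undefined' codec - is to change the default encoding so that every implicit str/unicode coercion fails loudly instead of silently decoding:

        # Python 2 only -- a debugging aid to surface implicit str/unicode conversions.
        import sys
        reload(sys)                              # site.py removes setdefaultencoding(); reload restores it
        sys.setdefaultencoding('undefined')      # the 'undefined' codec refuses every implicit conversion

        try:
            u'abc' + 'def'                       # the implicit decode of 'def' now raises
        except UnicodeError as exc:
            print 'caught mixed str/unicode concatenation:', exc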

    Read the article

  • Hebbian learning

    - by Bane
    I have asked another question on Hebbian learning before, and I guess I got a good answer which I accepted, but the problem is that I now realize I was completely mistaken about Hebbian learning, and I'm a bit confused. So, could you please explain how it can be useful, and what for? Because the way Wikipedia and some other pages describe it, it doesn't make sense! Why would we want to keep increasing the weight between the input and the output neuron if they fire together? What kind of problems can it be used to solve? When I simulate it in my head, it certainly can't do the basic AND, OR and other operations (say you initialize the weights at zero: the output neurons never fire, and the weights are never increased!).
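
    For reference, here is a minimal sketch (my own illustration, not from the question) of the plain Hebbian rule delta_w = eta * x * y on a simple threshold unit; it also shows why an all-zero weight start never gets off the ground unless something makes the output fire:

        # Plain Hebbian update: weights grow only when input and output are active together.
        def hebbian_update(weights, x, y, eta=0.1):
            return [w + eta * xi * y for w, xi in zip(weights, x)]

        w = [0.2, 0.0]            # small nonzero start so the output can fire at all
        for x in ([1, 1], [1, 0], [0, 1], [1, 1]):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0.1 else 0   # threshold output unit
            w = hebbian_update(w, x, y)
            print(x, y, w)        # correlated inputs keep strengthening their weights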

    Read the article

  • Known problems with filemtime() on Windows - files getting touched arbitrarily?

    - by Pekka
    Is there a known issue leading to file modification times of cache files on Windows XP SP3 getting arbitrarily updated, but without any actual change? Is there some service on a standard Windows XP install - backup, sync, versioning, virus scanner - known to touch files? They all have a .txt extension. If there isn't, forget it; then I'm getting something wrong in my cache routines, and I'll debug my way through.

    Background: I'm building a simple caching wrapper around a slow web site on a Windows server. I am comparing the filemtime() time stamp to some columns in the database to determine whether a cached file is stale. I'm having problems using this method because the modification time of the cache files seems to get updated in between operations without me doing anything. This results in stale files being displayed. I'm the only user on the machine. The operating system is Windows XP, the web server a XAMPP Apache 2 with PHP 5.2.

    Read the article

  • PHP file outside doc root needs files outside and inside the document root

    - by jax
    I have a library of classes, all interrelated. Some files are inside the document root and some are outside using the <Directory> and Alias features in httpd.conf. Assume I have 3 files:

    - webroot.php (inside the document root)
    - alias_directory.php (inside a folder outside the doc root)
    - alias_directory2.php (inside a **different** folder outside the doc root)

    If alias_directory2.php needs both webroot.php and alias_directory.php, this does not work (remember, alias_directory.php and alias_directory2.php are not in the same location):

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';          // (ok)
        require_once $_SERVER['DOCUMENT_ROOT'].'/alias_directory.php';  // (not ok)

    This does not work because alias_directory.php is not in the doc root. Similarly:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';  // (ok)
        require_once dirname(__FILE__).'/alias_directory.php';  // (not ok)

    The problem here is that dirname(__FILE__) will return the path for alias_directory2.php, not alias_directory.php. This works:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';       // (ok)
        require_once '/full/path/to/directory/alias_directory.php';  // (ok)

    but it is very nasty and a maintenance nightmare if I decide to move my library to another location. How do I solve this problem? It seems that I need a way to resolve an Alias folder properly.

    Read the article

  • How might one implement FileTimeToSystemTime?

    - by Billy ONeal
    Hello :) I'm writing a simple wrapper around the Win32 FILETIME structure. boost::datetime has most of what I want, except I need whatever date type I end up using to interoperate with Windows APIs without issues. To that end, I've decided to write my own things for doing this -- most of the operations aren't all that complicated. I'm implementing the TimeSpan-like type at this point, but I'm unsure how I'd implement FileTimeToSystemTime. I could just use the system's built-in FileTimeToSystemTime function, except FileTimeToSystemTime cannot handle negative dates -- I need to be able to represent something like "-12 seconds". How should something like this be implemented? Billy3
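
    The arithmetic behind the conversion is simple enough to prototype; here is a sketch (in Python, purely to illustrate the idea) treating a FILETIME as a signed count of 100-nanosecond ticks relative to the 1601-01-01 epoch, which naturally accommodates negative spans:

        from datetime import datetime, timedelta

        FILETIME_EPOCH = datetime(1601, 1, 1)      # FILETIME counts 100-ns ticks from this instant (UTC)

        def filetime_to_datetime(ticks):
            # timedelta accepts negative values, so spans like "-12 seconds" work unchanged.
            return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

        print(filetime_to_datetime(116444736000000000))   # -> 1970-01-01 00:00:00 (the Unix epoch)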

    Read the article

  • Running MSBuild script on development machine

    - by devdigital
    Hi, I have an MSBuild script which performs a lot of tasks, as it is run on our build server. I want the script to be run each time a developer builds from Visual Studio on their local development machine, so that:
    a) the build process they are running locally is the same as that run by the build server, so any problems in the build can be identified immediately by the developer, and
    b) many of the operations of the build script are run on local builds, for example running of unit tests, generation of code coverage reports, etc.
    How is this possible in Visual Studio (2008)? Note I am running a single-solution product with multiple projects.

    Read the article

  • Time complexity O() of isPalindrome()

    - by Aran
    I have this method, isPalindrome(), and I am trying to find its time complexity, and also rewrite the code more efficiently.

        boolean isPalindrome(String s) {
            boolean bP = true;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) != s.charAt(s.length() - i - 1)) {
                    bP = false;
                }
            }
            return bP;
        }

    Now I know this code checks the string's characters to see whether each one is the same as the corresponding character from the other end, and if it is then it doesn't change bP. I think the operations are s.length(), s.charAt(i) and s.charAt(s.length() - i - 1), making the time complexity O(N + 3), I think? Is this correct? If not, what is it, and how is that figured out? Also, to make this more efficient, would it be good to store the characters in temporary strings?
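
    As a rough illustration of the usual rewrite (sketched in Python here; the idea carries over directly to Java), compare only the first half of the string and bail out on the first mismatch, which is still O(n) in the worst case but does about half the work and exits early on non-palindromes:

        def is_palindrome(s):
            n = len(s)
            for i in range(n // 2):            # only the first half needs checking
                if s[i] != s[n - i - 1]:
                    return False               # stop at the first mismatch
            return True

        print(is_palindrome("racecar"))        # True
        print(is_palindrome("palindrome"))     # False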

    Read the article

  • Where can I find soft-multiply and divide algorithms?

    - by srking
    I'm working on a micro-controller without hardware multiply and divide. I need to cook up software algorithms for these basic operations that are a nice balance of compact size and efficiency. My C compiler port will employ these algos, not the C developers themselves. My google-fu is so far turning up mostly noise on this topic. Can anyone point me to something informative? I can use add/sub and shift instructions. Table-lookup-based algos might also work for me, but I'm a bit worried about cramming so much into the compiler's back-end... um, so to speak. Thanks!
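
    For reference, here is a minimal sketch (Python used as executable pseudocode; the real routines would be emitted as shifts, adds and subtracts) of the classic shift-and-add multiply and restoring shift-and-subtract divide for unsigned operands:

        # Shift-and-add multiplication: add a shifted copy of a for every set bit of b.
        def soft_mul(a, b):
            result = 0
            while b:
                if b & 1:
                    result += a
                a <<= 1
                b >>= 1
            return result

        # Restoring division: build quotient and remainder bit by bit, MSB first.
        def soft_divmod(dividend, divisor, bits=32):
            quotient, remainder = 0, 0
            for i in range(bits - 1, -1, -1):
                remainder = (remainder << 1) | ((dividend >> i) & 1)
                quotient <<= 1
                if remainder >= divisor:
                    remainder -= divisor
                    quotient |= 1
            return quotient, remainder

        print(soft_mul(13, 11))        # 143
        print(soft_divmod(143, 11))    # (13, 0)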

    Read the article

  • Storing And Using Microsoft User Account Credentials in SQL Server 2008 database

    - by user337501
    I'm not exactly positive how to word this for the sake of the title, so please forgive me. Also, I can't seem to figure out how to even google this question, so I'm hoping I can get a lead in the right direction.

    Part of my software (a VB.NET app) requires the ability to access/read/write a shared network folder. I have an option for the user to specify any credentials that might be needed to access said folder. I want to store these credentials in the SQL Server database as part of the config (I have a table which contains configuration). My concern is that the password for the user account will be unencrypted. Yet, if I encrypt the password, the VB.NET app and/or database will be unable to use the credentials for file I/O operations unless the password is unencrypted before use. I'm fishing for suggestions on how to better handle this situation.

    Read the article

  • How to limit data to users who own it without limiting admin users in CakePHP?

    - by cdburgess
    Currently I am writing an application where I have multiple users. They have data that should only be visible to them and not the other authenticated users in the system. I also have administrators who manage the system and have access to all of the information. What is the best way to limit users to their data without limiting admin users? Currently I am using a callback to limit the queries by user, but the admin will get the same limits. So I need to know a better way to do it. More importantly, the right way to do it. For example, I want the standard user to be able to see their user information only and be limited to CRUD operations on their information only. The admin, however, should be able to see ALL users and CRUD ALL user data. Any ideas?

    Read the article

  • Making self-logging modules with Log::Log4perl

    - by Oesor
    Is there a way to use Log::Log4perl to make a smart self-logging module that logs its operations to a file even when the calling script does not initialize Log4perl? As far as I can tell from the documentation, the only way to use Log4perl is to initialize it in the running script from a configuration; modules making Log4perl calls then log themselves based on the caller's Log4perl config. Instead, I'd like the modules to provide a default initialization config for Log4perl. This would provide the default file appender for the module's category. Then, I could override this behavior by initing Log4perl in the caller with a different config if needed, and everything would hopefully just work. Is this sort of defensive logging behavior possible, or am I going to need to rely on initing Log4perl in every .pl script that calls the module I want logged?

    Read the article

  • Is there any reason why someone would want to create a Core Data model programmatically?

    - by mystify
    I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all this stuff. I'm the kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode Data Modeler tool. And since data models are stuck to a given state (except when you want to do some ugly migration operations where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time. Did I miss something?

    Read the article

  • Python: Reading a huge file using linecache vs. normal file access with open()

    - by user335223
    Hi, I am in a situation where multiple threads read the same huge file, with multiple file pointers to the same file. The file will have at least 1 million lines. Each line's length varies from 500 to 1500 characters. There won't be any "write" operations on the file. Each thread will start reading the same file from a different line. Which is the efficient way? Using Python's linecache, normal readline(), or is there any other efficient way?
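
    One baseline worth benchmarking (a sketch under my own assumptions - the path, chunk sizes and process() below are made up): give each thread its own file object and skip ahead with itertools.islice, so no thread tries to hold a million lines in memory the way linecache's whole-file cache would.

        import itertools
        import threading

        def process(line):
            pass    # placeholder for the real per-line work

        def read_from(path, start_line, num_lines):
            # A private file object per thread; islice reads past the skipped
            # lines once but never keeps them around.
            with open(path, 'r') as f:
                for line in itertools.islice(f, start_line, start_line + num_lines):
                    process(line)

        threads = [threading.Thread(target=read_from, args=('huge.txt', start, 250000))
                   for start in (0, 250000, 500000, 750000)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()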

    Read the article
