Search Results

Search found 4050 results on 162 pages for 'requirements'.


  • Square Peg Web: Gets you the traffic to where it matters most: Your Website!

    - by demetriusalwyn
    Have you decided to start your business online, or is your business not reaching its targeted audience? Come to Square Peg Web, where you will find what you need to take your business to new heights. The team at Square Peg Web are professionals who understand what you want and make sure you get it right. Our confidence stems from the thousands of satisfied clients who keep referring friends and business associates to us; we do not let our clients down. Many companies promise the sky, but how far does their work live up to the promises? We do not know about the others; we are sure that we strive to put together all our ideas and thoughts to make your website rank among the top.

    Web hosting is something that needs a personal touch; Square Peg Web customizes everything to suit your requirements so that you do not have to look further. With Square Peg Web you have a host of features to make your business go viral. Some of the product details offered with Square Peg Web are unlimited product options/variants/properties, giving you an option on price modifiers. You get unlimited customized input fields for your products, and you can also customer-define the prices. Square Peg Web provides the option of using multiple product images with zoom features, and a particular product can also be listed in several categories.

    There are other aspects which make Square Peg Web the best choice for your website needs; every sale of yours is important to you and to us. We make sure that each sale is tracked by product, along with the list of bestsellers that appeal to the audience. Other comprehensive statistics from Square Peg Web include searchable order data, an interface for shipments and order fulfilment, export of sales and customer data for use in a spreadsheet, and the ability to export orders to QuickBooks format.

    With Square Peg Web, the admin panel is a lot simpler. Administrative access is completely password protected and any changes are made in real time. You have absolute control of the cart from anywhere in the world using your web browser, and the topping on the cake is the unlimited number of admin accounts that can be created for you.

    Square Peg Web offers you a world of experience, with options ranging from marketing websites to e-commerce and from customized applications to community-oriented sites. Some of the projects in the Square Peg Web portfolio are online marketing web sites, e-commerce web sites, customized web applications, blog design and programming, video sharing and downloadable web sites, online advertisements, Flash animation, customer and product support web sites, web site re-design and planning, and complete information architecture.


  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (Reasoning): There are many columns in a number of tables (a number which grows (only) and is maintained elsewhere). I need a method of allowing the user to decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are thousands of users with varying requirements and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change. Also, there is no way of knowing what conditions the user will need to use.

    I have objects (received via web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifier) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but it would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the Sanitize method? And does quoting the column name make it safe? (My limited testing seems to think so.)

    An example built condition would be:

        AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            'put quotes around the field name in an attempt to prevent some sql injection
            'http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                '...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) 'for parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            'compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
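
    A common hardening step here - shown only as a rough sketch, since the class name and the field list below are assumptions rather than code from the question - is to validate _FieldName against a whitelist (or at least a strict identifier pattern) before it ever reaches the SQL string, and to keep everything the user types travelling as a bound parameter instead of relying on the Sanitize blacklist. Quoting with double quotes only helps if the value itself can never contain a double quote, so rejecting anything that fails a check like this matters:

        Imports System.Collections.Generic
        Imports System.Text.RegularExpressions

        Public Class FieldNameValidator
            'hypothetical whitelist - in practice this could be loaded from wherever
            'the growing column list is maintained
            Private Shared ReadOnly AllowedFields As New HashSet(Of String)( _
                New String() {"brandloaded.make", "brandloaded.model"})

            Public Shared Function IsSafe(ByVal fieldName As String) As Boolean
                'accept only known fields, or at minimum a strict schema.column pattern
                Return AllowedFields.Contains(fieldName) OrElse _
                       Regex.IsMatch(fieldName, "^[a-z_][a-z0-9_]*(\.[a-z_][a-z0-9_]*)?$")
            End Function
        End Class

    GetCondition could then call FieldNameValidator.IsSafe(_FieldName) and throw (or drop the condition) when it returns False.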


  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however this particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers;
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster;
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients, when a node goes down;
    - Ideally it should be possible to specify the degree of data overlap (e.g. how many copies of the same data should be stored in the DSM cluster);
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB);
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must;
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available;
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.


  • How to know when a user has really released a key in Java?

    - by Luis Soeiro
    (Edited for clarity) I want to detect when a user presses and releases a key in Java Swing, ignoring the keyboard auto-repeat feature. I also would like a pure Java approach that works on Linux, Mac OS and Windows.

    Requirements:

    1. When the user presses some key, I want to know which key it is;
    2. When the user releases some key, I want to know which key it is;
    3. I want to ignore the system auto-repeat options: I want to receive just one key-press event for each key press and just one key-release event for each key release;
    4. If possible, I would use items 1 to 3 to know if the user is holding more than one key at a time (i.e. she hits 'a' and, without releasing it, hits Enter).

    The problem I'm facing in Java is that under Linux, when the user holds some key, many keyPressed and keyReleased events are fired (because of the keyboard repeat feature). I've tried some approaches with no success:

    - Get the last time a key event occurred - in Linux, the differences seem to be zero for key repeat; however, in Mac OS they are not;
    - Consider an event only if the current keyCode is different from the last one - this way the user can't hit the same key twice in a row.

    Here is the basic (non-working) part of the code:

        import java.awt.event.KeyEvent;
        import java.awt.event.KeyListener;

        public class Example implements KeyListener {
            public void keyTyped(KeyEvent e) {
            }

            public void keyPressed(KeyEvent e) {
                System.out.println("KeyPressed: " + e.getKeyCode() + ", ts=" + e.getWhen());
            }

            public void keyReleased(KeyEvent e) {
                System.out.println("KeyReleased: " + e.getKeyCode() + ", ts=" + e.getWhen());
            }
        }

    When a user holds a key (e.g. 'p') the system shows:

        KeyPressed: 80, ts=1253637271673
        KeyReleased: 80, ts=1253637271923
        KeyPressed: 80, ts=1253637271923
        KeyReleased: 80, ts=1253637271956
        KeyPressed: 80, ts=1253637271956
        KeyReleased: 80, ts=1253637271990
        KeyPressed: 80, ts=1253637271990
        KeyReleased: 80, ts=1253637272023
        KeyPressed: 80, ts=1253637272023
        ...

    At least under Linux, the JVM keeps resending all the key events while a key is being held. To make things more difficult, on my system (Kubuntu 9.04, Core 2 Duo) the timestamps keep changing. The JVM sends a new key release and new key press with the same timestamp, which makes it hard to know when a key is really released. Any ideas? Thanks
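
    One widely used workaround - sketched below with an arbitrary 35 ms delay, not code from the question - exploits exactly the observation above: on Linux the synthetic release/press pair arrives (nearly) back to back, so a release is only treated as real if no press for the same key code follows within a few milliseconds. A javax.swing.Timer keeps everything on the event dispatch thread:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;
        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.Timer;

        public class RepeatFilter extends KeyAdapter {
            // pending "maybe released" keys; no locking needed because Swing delivers
            // key events and Timer callbacks on the same EDT
            private final Map<Integer, Timer> pending = new HashMap<Integer, Timer>();

            public void keyPressed(KeyEvent e) {
                Timer t = pending.remove(Integer.valueOf(e.getKeyCode()));
                if (t != null) {
                    t.stop();                 // auto-repeat pair: swallow it
                    return;
                }
                System.out.println("real press: " + e.getKeyCode());
            }

            public void keyReleased(KeyEvent e) {
                final Integer code = Integer.valueOf(e.getKeyCode());
                Timer t = new Timer(35, new ActionListener() {
                    public void actionPerformed(ActionEvent ev) {
                        pending.remove(code);
                        System.out.println("real release: " + code);
                    }
                });
                t.setRepeats(false);
                pending.put(code, t);
                t.start();
            }
        }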


  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking and that sort of thing. It just happens that I'm using Scala.

    Given this set of requirements:

    - Input comes from an essentially infinite stream of random numbers between 1 and 10.
    - Final output is either SUCCEED or FAIL.
    - There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself.

    Pseudocode:

        if (first number == 1) SUCCEED
        else if (first number >= 9) FAIL
        else {
          first = first number
          rest = rest of stream
          for each (n in rest) {
            if (n == 1) FAIL
            else if (n == first) SUCCEED
            else continue
          }
        }

    Here is a possible mutable implementation:

        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result
        case object NoResult extends Result

        class StreamListener {
          private var target: Option[Int] = None

          def evaluate(n: Int): Result = target match {
            case None =>
              if (n == 1) Succeed
              else if (n >= 9) Fail
              else {
                target = Some(n)
                NoResult
              }
            case Some(t) =>
              if (n == t) Succeed
              else if (n == 1) Fail
              else NoResult
          }
        }

    This will work but smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with two other possible options:

    - Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right.
    - Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation.

    Is there any standard Scala/FP idiom I'm overlooking here?
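
    One functional shape worth sketching - an illustration rather than code from the question, reusing the Result hierarchy above - is to keep evaluate pure and have it return the result paired with the listener to use for the next number. The registering class then just carries the latest listener along, which preserves the "one number at a time" interaction without any vars and without making Result a subtype of the listener:

        case class PureStreamListener(target: Option[Int]) {
          def evaluate(n: Int): (Result, PureStreamListener) = target match {
            case None =>
              if (n == 1) (Succeed, this)
              else if (n >= 9) (Fail, this)
              else (NoResult, PureStreamListener(Some(n)))
            case Some(t) =>
              if (n == t) (Succeed, this)
              else if (n == 1) (Fail, this)
              else (NoResult, this)
          }
        }

        // usage: val (result, next) = listener.evaluate(n)   // keep 'next' for the following number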


  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment, it's a real situation in an Access database and I've written the requirements below myself.

    Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL.

    - Id (primary key)
    - PersonID (foreign key)
    - EventDate
    - NumberOfCredits
    - SuperCredits (boolean)

    There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   3                false
        2   174       1/1/2010   1                true

    It is also possible that the person could have done two separate things at the event, so there might be more than two rows for one event, and it might look like this:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   1                false
        2   174       1/1/2010   2                false
        3   174       1/1/2010   1                true

    Now we want to print out a report. Here are the columns of the report:

    - PersonID
    - LastEventDate
    - NumberOfNormalCredits
    - NumberOfSuperCredits

    The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out but I'm having a real hard time with it; this is grouping beyond my experience...
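
    One possible shape for the query - sketched on the assumption that the table is called Credits, which the question never states - is to find each person's latest event date in a derived table first, then conditionally sum the credits for just those rows, using IIf to split normal from super credits:

        SELECT c.PersonID,
               latest.LastEventDate,
               SUM(IIf(c.SuperCredits, 0, c.NumberOfCredits)) AS NumberOfNormalCredits,
               SUM(IIf(c.SuperCredits, c.NumberOfCredits, 0)) AS NumberOfSuperCredits
        FROM Credits AS c
        INNER JOIN (SELECT PersonID, MAX(EventDate) AS LastEventDate
                    FROM Credits
                    GROUP BY PersonID) AS latest
                ON c.PersonID = latest.PersonID
               AND c.EventDate = latest.LastEventDate
        GROUP BY c.PersonID, latest.LastEventDate;

    Access generally accepts derived tables written this way, but the exact bracketing sometimes needs tweaking in the query designer.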


  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds:

    - Stocks - contains a unique stock name and its daily percent change.
    - UserTransactions - contains information regarding a specific purchase of a stock made by a user: the value of the purchase along with a reference to the Stock purchased.

    db.Model python modules:

        class Stocks(db.Model):
            stockname = db.StringProperty(multiline=True)
            dailyPercentChange = db.FloatProperty(default=1.0)

        class UserTransactions(db.Model):
            buyer = db.UserProperty()
            value = db.FloatProperty()
            stockref = db.ReferenceProperty(Stocks)

    Once an hour I need to update the database: update the daily percent change in Stocks and then update the value of all entities in UserTransactions that refer to that stock. The following python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities which refer to the stock and update their value.

    Stocks.py:

        # Iterate over all stocks in datastore
        for stock in Stocks.all():
            # update daily percent change in datastore
            db.run_in_transaction(updateStockTxn, stock.key())
            # create a task to update all user transactions entities referring to this stock
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock')})

        def updateStockTxn(stock_key):
            # fetch the stock again - necessary to avoid concurrency updates
            stock = db.get(stock_key)
            stock.dailyPercentChange = data.get('some_val_for_stock')  # I get this value from outside
            # ... some more calculations here ...
            stock.put()

    Task.py (/task):

        # Amount of transactions per task
        amountPerCall = 10
        stock = db.get(self.request.get("stock_key"))
        # Get all user transactions which point to the current stock
        user_transaction_query = stock.usertransactions_set
        cursor = self.request.get("cursor")
        if cursor:
            user_transaction_query.with_cursor(cursor)
        # Spawn another task if more than 10 transactions are in the datastore
        transactions = user_transaction_query.fetch(amountPerCall)
        if len(transactions) == amountPerCall:
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock'),
                                               'cursor': user_transaction_query.cursor()})
        # Iterate over all transactions pointing to the stock and update their value
        for transaction in transactions:
            db.run_in_transaction(updateUserTransactionTxn, transaction.key())

        def updateUserTransactionTxn(transaction_key):
            # fetch the transaction again - necessary to avoid concurrency updates
            transaction = db.get(transaction_key)
            transaction.value = transaction.value * self.request.get('some_val_for_stock')
            db.put(transaction)

    The problem: currently the system works great, but it is not scaling well. I have around 100 Stocks with 300 UserTransactions, and I run the update every hour. In the dashboard, I see that task.py takes around 65% of the CPU (Stocks.py takes around 20%-30%) and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the problem is the scaling of the system. Using 6.5 CPU hours for 100 stocks is very poor. I was wondering, given the requirements of the system as mentioned above, if there is a better and more efficient implementation (or just a small change that can help the current implementation) than the one presented here. Thanks!! Joel
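
    Most of the cost in a loop like the one in Task.py is the per-entity get/put round-trips (one transaction per UserTransactions row). A rough sketch of one common change - illustrative only, and it deliberately trades per-entity transactions for two datastore calls per batch - is to update each fetched page with a single batch get and a single batch put:

        from google.appengine.ext import db

        def update_transaction_batch(keys, factor):
            """Multiply the value of every entity in 'keys' by 'factor' using one
            batch read and one batch write instead of a transaction per entity."""
            transactions = db.get(keys)      # one round-trip for the whole page
            for t in transactions:
                t.value = t.value * factor
            db.put(transactions)             # one round-trip to write them all back

    In the existing task this would replace the for-loop over transactions, passing [t.key() for t in transactions] and the already-parsed float value of 'some_val_for_stock'.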


  • How to enable UAC prompt through programming

    - by peter
    I want to implement a UAC prompt for an application in Visual C++. The operating system is 32-bit, x7460 (2 processors), Windows Server 2008, and the exe is myproject.exe; I want to do it through a manifest. For testing I will build the application on a Windows XP machine, then copy the exe to the Windows Server 2008 machine and replace it there.

    So what I did is add a manifest named myproject.exe.manifest. My project has three folders (Header Files, Resource Files and Source Files). I added this manifest (myproject.exe.manifest) to the Source Files folder containing the other cpp and c code:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="11.1.4.0"
                            processorArchitecture="X7460"
                            name="myproject"
                            type="win32"/>
          <description>myproject Problem</description>
          <!-- Identify the application security requirements. -->
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    Then I added these lines for the resource file (.rc). There is a header file (Myproject.h) and I added them there:

        #define MANIFEST_RESOURCE_ID 1
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    Finally I did the following steps:

    1. Open your project in Microsoft Visual Studio 2005.
    2. Under Project, select Properties.
    3. In Properties, select Manifest Tool, and then select Input and Output.
    4. Add in the name of your application manifest file under Additional manifest files.
    5. Rebuild your application.

    But I am getting a lot of syntax errors. Is there any problem in the way I followed? If I comment out the lines

        #define MANIFEST_RESOURCE_ID 1
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    which I added to Myproject.h for adding values to the .rc file, there is no error other than this general one:

        c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest

    How do I enable a UAC prompt through programming?


  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS-DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far.

    One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable-length record format much like CSV that I've successfully implemented using the C standard library file I/O functions.

    When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well; however, I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record.

    Most of the time records will grow in size, because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records is, and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area.

    The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file, writing all the data you wish to keep into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size.

    I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
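
    For what it's worth, Ralph Brown's list does document a way to do this: INT 21h function 40h ("write to file or device") with CX=0 truncates or extends the file at the current file-pointer position. A rough, untested sketch is below; it assumes the Digital Mars 16-bit runtime exposes the usual <dos.h> int86()/union REGS interface with a cflag member, which should be double-checked against its headers:

        #include <dos.h>
        #include <io.h>
        #include <stdio.h>

        /* Truncate an already-open DOS file handle at new_size bytes:
           seek there, then issue INT 21h AH=40h with CX=0 (write zero bytes). */
        static int dos_truncate(int handle, long new_size)
        {
            union REGS r;

            if (lseek(handle, new_size, SEEK_SET) == -1L)
                return -1;

            r.h.ah = 0x40;      /* DOS "write" function                 */
            r.x.bx = handle;    /* file handle                          */
            r.x.cx = 0;         /* zero bytes => truncate/extend here   */
            int86(0x21, &r, &r);

            return r.x.cflag ? -1 : 0;   /* carry flag set on error */
        }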


  • Custom dynamic error pages in Ruby on Rails not working

    - by PlanetMaster
    Hi, I'm trying to implement custom dynamic error pages following this post: http://www.perfectline.co.uk/blog/custom-dynamic-error-pages-in-ruby-on-rails

    I did exactly what the blog post says. I included config.action_controller.consider_all_requests_local = false in my environment.rb, but it is not working. My browser shows:

        Routing Error
        No route matches "/555" with {:method=>:get}

    So it looks like the rescues are not fired. I get the following in my log file:

        ActionController::RoutingError (No route matches "/555" with {:method=>:get}):
        Rendering rescues/layout (not_found)

    Is there some routing interfering with the code? I'm not sure what to look for. I'm running Rails 2.3.5. Here is the routes.rb file:

        ActionController::Routing::Routes.draw do |map|
          # routing van property-url
          map.connect 'buy/:property_type_plural/:province/:city/:address/:house_number',
                      :controller => 'properties', :action => 'show', :id => 'whatever'
          map.myimmonatie 'myimmonatie', :controller => 'myimmonatie/properties', :action => 'index'
          map.login "login", :controller => "user_sessions", :action => "create", :conditions => {:method => :post}
          map.login "login", :controller => "user_sessions", :action => "new"
          map.logout "logout", :controller => "user_sessions", :action => "destroy"
          map.buy "buy", :controller => 'buy'
          map.sell "sell", :controller => 'sell'
          map.home "home", :controller => 'home'
          map.disclaimer "disclaimer", :controller => 'disclaimer'
          map.sign_up "sign_up", :controller => 'users', :action => :new
          map.contact "contact", :controller => 'contact'

          map.resources :user_sessions
          map.resources :contact
          map.resources :password_resets
          map.resources :messages
          map.resources :users, :only => [:index, :new, :create, :activate, :edit, :profile, :password]
          map.resources :images
          map.resources :activation, :only => [:new, :resend]
          map.resources :email
          map.resources :properties, :except => [:index, :destroy]

          map.namespace :admin do |admin|
            admin.resources :users
            admin.resources :properties
            admin.resources :order_items, :as => :orders
            admin.resources :blog_posts, :as => :blog
          end

          map.connect 'myimmonatie/:action', :controller => 'users', :id => 'current',
                      :requirements => {:action => /(profile)|(password)|(email)/}

          map.namespace :myimmonatie do |myimmonatie|
            myimmonatie.resources :messages, :controller => 'messages'
            myimmonatie.resources :password, :as => "password", :controller => 'users', :action => 'password'
            myimmonatie.resources :properties, :controller => 'properties'
            myimmonatie.resources :orders, :only => [:index, :show, :create, :new]
          end

          map.root :controller => "home"
          map.connect ':controller/:action'
          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end

        ActionController::Routing::Translator.translate_from_file('config','i18n-routes.yml')
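
    Independent of the linked approach, one low-tech fallback for Rails 2.3 - shown here as a sketch, with a hypothetical ErrorsController - is a catch-all route placed as the very last line inside the draw block, so a path like /555 is dispatched to a normal controller action instead of going through the RoutingError/rescue path at all:

        # at the bottom of routes.rb, after every other route
        map.connect '*path', :controller => 'errors', :action => 'not_found'

        # app/controllers/errors_controller.rb
        class ErrorsController < ApplicationController
          def not_found
            render :template => 'errors/not_found', :status => 404
          end
        end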


  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys, but it would be nice if I could use a couple more types, hopefully up to anything that is hashable (i.e. the same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme.

    Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match:

        >>> import pickle
        >>> import cPickle
        >>> def dumps(x):
        ...     print repr(pickle.dumps(x))
        ...     print repr(cPickle.dumps(x))
        ...
        >>> dumps(1)
        'I1\n.'
        'I1\n.'
        >>> dumps('hello')
        "S'hello'\np0\n."
        "S'hello'\np1\n."
        >>> dumps((1, 2, 'hello'))
        "(I1\nI2\nS'hello'\np0\ntp1\n."
        "(I1\nI2\nS'hello'\np1\ntp2\n."

    Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows):

        def is_reprable_key(key):
            return type(key) in (int, str, unicode) or (type(key) == tuple and all(
                is_reprable_key(x) for x in key))

    The question for this method is whether repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64, jumping between 32- and 64-bit platforms. Are there any other conditions (i.e. do strings serialize differently under different conditions)?

    (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?
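
    A third option worth sketching - not from the question, and limited to the same conservative type set as is_reprable_key - is to define an explicit canonical encoding instead of relying on pickle or repr staying stable across implementations, platforms and versions:

        def canonical_key(key):
            """Deterministic, length-prefixed text encoding for a small set of key types."""
            if isinstance(key, (int, long)):
                return 'i:%d' % key
            if isinstance(key, str):
                return 's:%d:%s' % (len(key), key)
            if isinstance(key, unicode):
                data = key.encode('utf-8')
                return 'u:%d:%s' % (len(data), data)
            if isinstance(key, tuple):
                return 't:(%s)' % ','.join(canonical_key(x) for x in key)
            raise TypeError('unsupported key type: %r' % type(key))

    Because the rules are spelled out in one place, questions like "does repr of this integer gain an L suffix on that platform?" stop mattering, and decoding is equally mechanical if round-tripping is ever needed.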


  • Using a Button to navigate to another Page in a NavigationWindow

    - by Will
    I'm trying to use the navigation command framework in WPF to navigate between Pages within a WPF application (desktop; not XBAP or Silverlight). I believe I have everything configured correctly, yet it's not working. I build and run without errors, I'm not getting any binding errors in the Output window, but my navigation button is disabled.

    Here's the App.xaml for a sample app:

        <Application x:Class="Navigation.App"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            StartupUri="First.xaml">
        </Application>

    Note the StartupUri points to First.xaml. First.xaml is a Page, so WPF automatically hosts my page in a NavigationWindow. Here's First.xaml:

        <Page x:Class="Navigation.First"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="First">
            <Grid>
                <Button CommandParameter="/Second.xaml"
                        CommandTarget="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type NavigationWindow}}}"
                        Command="NavigationCommands.GoToPage"
                        Content="Go!"/>
            </Grid>
        </Page>

    The button's CommandTarget is set to the NavigationWindow. The command is GoToPage, and the page is /Second.xaml. I've tried setting the CommandTarget to the containing Page, the CommandParameter to "Second.xaml" (First.xaml and Second.xaml are both in the root of the solution), and I've tried leaving the CommandTarget empty. I've also tried setting the Path of the Binding to various navigation-related public properties on the NavigationWindow. Nothing has worked so far. What am I missing here? I really don't want to do my navigation in code.

    Clarification: if, instead of using a button, I use a Hyperlink:

        <Grid>
            <TextBlock>
                <Hyperlink NavigateUri="Second.xaml">Go!</Hyperlink>
            </TextBlock>
        </Grid>

    everything works as expected. However, my UI requirements mean that using a Hyperlink is right out. I need a big fatty button for people to press. That's why I want to use the button to navigate. I just want to know how I can get the Button to provide the same ability that the Hyperlink does in this case.
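
    One detail that may explain the disabled button: GoToPage is just a stock RoutedUICommand with no default handler, so unless something in the element tree registers a CommandBinding whose CanExecute returns true, the Button stays greyed out. A minimal sketch of one fix (my assumption, not code from the question) is to register that binding in First.xaml.cs and forward the parameter to the page's NavigationService, after which the plain Command/CommandParameter markup works without a CommandTarget:

        // First.xaml.cs
        public partial class First : Page
        {
            public First()
            {
                InitializeComponent();

                CommandBindings.Add(new CommandBinding(
                    NavigationCommands.GoToPage,
                    (sender, e) => NavigationService.Navigate(
                        new Uri((string)e.Parameter, UriKind.Relative)),
                    (sender, e) => e.CanExecute = true));
            }
        }

    It is a little bit of code-behind, but it keeps the XAML declarative and covers any number of buttons that issue GoToPage.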


  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, plus the download must be able to be aborted at any time without any barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements:

    - Should be able to download the file "into memory". Meaning: read the contents of the file as they arrive, not necessarily save them into some "file system" file.
    - It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.

    I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supported asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll it to get its status; it reports "in progress", but nothing actually happens and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state). The only way to make this object work again is to restart the process.

    This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.

    Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code because of which it only seems to me that the bug is in XmlHttpRequest. I've done enough tests and spent time with the debugger to conclude, without reasonable doubt, that it's just the object that stops working.

    By the way, while the object should work, I do all the waiting via MsgWaitXXXX API calls. So if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.


  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT based application framework. I have some ideas about them, but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions in this regard. Here are my questions:

    I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iframe postMessage. All these third-party modules are going to use our SDK, which is also GWT based. The problem is that even though all the modules will be using the same codebase, there is going to be a massive overhead in the amount of duplicate JavaScript code (i.e. our common SDK codebase, which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of the duplicate code but also because subsequent updates of the SDK would require the modules to be recompiled, which is going to create a DLL-hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way where I can have some static GWT code (i.e. the SDK) and the dynamic GWT module refers to it (even if it lies on a different domain) and it all works happily?

    The other part of the problem lies in providing a robust scripting front end to the SDK. At first it appears to be trivial, since JavaScript itself is a scripting language. However, I do not know how to load and call a piece of pure JavaScript code at runtime. I am willing to put restrictions on the target JavaScript (i.e. having a function main and a unique namespace or something). Furthermore the JavaScript will come as a string from a database and not as a full URL. If it's doable in JavaScript, how does one get this right in GWT, i.e. forcing the compiler to emit a certain function in the generated JavaScript? This I believe can be made less of a problem by having a stub JavaScript with all the right requirements which just loads up a GWT-generated JavaScript. Is any of this possible at all?

    I generally hate to be this verbose, but I hope to find a quick solution to the problem as it's holding up my progress. I'd highly appreciate any comments, suggestions and experiences.


  • PDF Report generation

    - by IniTech
    EDIT: I completed this project using ABCpdf. For anyone interested, I love this product and their support is A+. Everything I listed as a 'con' for the HTML-to-PDF solution was easily doable in ABCpdf.

    I've been charged with creating a data-driven PDF report. After reviewing the plethora of options, I have narrowed it down to 2. I need you all to help me decide, or offer alternatives I haven't considered. Here are the requirements:

    - 100% data driven
    - Eventually PDF (a stop in HTML is fine, so long as it is converted)
    - Can be run with multiple sets of data (the layout is always the same, the data is variable)
    - Contains normal analysis-style copy (saved in DB with html markup)
    - Contains tables (data for tables is generated at run-time)
    - Header/page # on each page
    - Table of contents
    - .NET (VB or C#)
    - Done quickly

    Now, because the report is going to be generated with multiple sets of data, I don't think a stamped PDF template will work, since I won't know how long or how many pages a certain piece of the report could require. So I think my best options are:

    1. Programmatic creation using an iText-like solution.
    2. Generate in HTML and convert to PDF using a third-party application (ABCpdf is the tool I have played with so far).

    Both solutions have their pros and cons.

    Programmatic solution - pros:
    - Flexible
    - Easy page numbering/page header/table of contents
    - Free

    Programmatic solution - cons:
    - Time consuming (to write a layer on top of iText to do what I need and keep it maintainable)
    - Since the copy is already stored in the db with html markup, I would have to parse through the data before I place it into the pdf, ensuring I don't have to break the paragraph into chunks so I can apply bold, italic, underline, etc. to specific phrases. This seems like a huge PITA, and I hope I am wrong about that assumption.

    HTML to PDF - pros:
    - Easy to generate from db (no parsing necessary)
    - Many tools for conversion
    - Uses technology I am already familiar with
    - Built-in "print preview" - not a requirement, but nice

    HTML to PDF - cons: (Edited after project completion. All of my assumptions were incorrect and ABCpdf is awesome)
    1. Almost impossible to generate page headers - Not true
    2. Very difficult to generate page numbers - Not true
    3. Nearly impossible to generate a table of contents - Not true
    4. (Cross-browser support isn't a con; since it's internal, I can dictate what browser to use)
    5. Conversion tool quirks - may not convert exactly as rendered in the browser - Not true
    6. Overall, I think it would be very hard to format the HTML exactly as I would want it to appear/convert to PDF. - Not true

    That's it - I need the community's help in deciding which way I should go. I might be wrong about some of my pro/con assumptions. If I am, please tell me. All thoughts and suggestions are welcome and appreciated. Thanks


  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of retrieve(), like retrieveByName(); in this case a different SQL statement is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing some additional performance out by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that now there's one that offers acceptable performance. As I see this issue, we need to address these points:

    1. Find some way to reuse at least some of the written SQL to express mappings.
    2. Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42), as such calls are very sensitive to schema changes).
    3. Add a non-intrusive caching layer.
    4. Keep the performance figures.

    Is there any ORM framework you could recommend with regards to that?
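
    On points 1 and 2, the JPA native-query facility is the usual bridge: an existing hand-written statement is handed to the provider as-is, and the result set is mapped onto an entity class instead of being picked apart with rs.getInt(...). A small sketch (the entity, table and column names here are invented for illustration):

        import java.util.List;
        import javax.persistence.EntityManager;

        public class OrderDao {
            private final EntityManager em;

            public OrderDao(EntityManager em) {
                this.em = em;
            }

            @SuppressWarnings("unchecked")
            public List<Order> findByCustomer(long customerId) {
                // reuse the legacy SQL verbatim; the provider maps rows to Order entities
                return em.createNativeQuery(
                        "SELECT o.* FROM orders o WHERE o.customer_id = ?", Order.class)
                    .setParameter(1, customerId)
                    .getResultList();
            }
        }

    Second-level cache configuration (point 3) then lives in the provider rather than in the DAOs, which is about as non-intrusive as caching gets.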


  • How do you remind your Scrum Product Owner about his promises/actions?

    - by Felix Ogg
    ** EDIT: Rephrased the question to re-focus **

    Our Scrum team meets as seldom as possible, but we meet with the product owner every chance we get. We track everyone's agreed action points (particularly theirs). We are 100% agile, but our product owner lives in a traditional world; we remain off-site. We facilitate him in crossing over to our fast-paced world.

    There's not much wrong. The team and the PO are in good spirits. The PO is present at every meeting and positively energized. Just imagine this person as a 70-year-old, slow grandpa, who is forgetful, yet kind. In reality he isn't, but he is used to a working environment (public servants) that is much slooooower. Manyana-manyana, etc. It is frustrating for my team to cooperate: the PO lives in a non-prioritized environment, and everyone in it has learned the productivity technique of NGTD (Not Getting Things Done). He WANTS to, it's just that he forgets or 'sinks' somewhere along the way.

    We have experimented with:

    - a text file, maintained by the Scrum master (low-tech), which he broadcasts by e-mail every day;
    - JIRA, our issue tracker. Turns out this is nice for programmers, but too steep for 'regular people'.

    I Googled for issue-tracking web tools but came up empty-handed: all the tools are aimed at IT issue tracking, instead of meeting action-point tracking/planning for mere mortals. I did find TODO lists like RememberTheMilk, but they don't track comments, and - to be honest - I doubt we could get our product owner to use it (too complicated).

    We have three requirements:

    1. Register action points, assigned to a team member and a deadline;
    2. Offer anyone the ability to 'comment' on the progress of any action point;
    3. Do not build our own tool from scratch.

    We do not need: impressive authorization models, multi-project, workflow, or cross-linking.

    Is there any trick/tool you use to assist your product owner to 'fly' like the rest of the team?

    Communication before tools: I agree with the general consensus that one should not try to apply technology to a communication problem; however, in this case I am merely looking for a tool to save me time in setting up prioritized lists. I found www.thymer.com today, which may be what I am looking for. The guys are cool. It is getting rather feature-bloated though.


  • Creating AST for shared and local variables

    - by Rizwan Abbasi
    Here is my grammar:

        grammar simulator;

        options {
          language = Java;
          output = AST;
          ASTLabelType = CommonTree;
        }

        // imaginary tokens
        tokens {
          SHARED;
          LOCALS;
          BOOL;
          RANGE;
          ARRAY;
        }

        parse
          : declaration+
          ;

        declaration
          : variables
          ;

        variables
          : locals
          ;

        locals
          : (bool | range | array)
          ;

        bool
          : ID 'in' '[' ID ',' ID ']' ('init' ID)? -> ^(BOOL ID ID ID ID?)
          ;

        range
          : ID 'in' '[' INT '..' INT ']' ('init' INT)? -> ^(RANGE ID INT INT INT?)
          ;

        array
          : ID 'in' 'array' 'of' '[' INT '..' INT ']' ('init' INT)? -> ^(ARRAY ID INT INT INT?)
          ;

        ID : (('a'..'z' | 'A'..'Z' | '_')('a'..'z' | 'A'..'Z' | '0'..'9' | '_'))* ;

        INT : ('0'..'9')+ ;

        WHITESPACE : ('\t' | ' ' | '\r' | '\n' | '\u000C')+ {$channel = HIDDEN;} ;

    INPUT:

        flag in [down, up] init down
        pc in [0..7] init 0
        CA in array of [0..5] init 0

    AST:

    It is having a small problem. Variables (bool, range or array) can be of two abstract types:

    1. locals (each object will have its own variable)
    2. shared (think of static in Java; the same for all objects)

    Now the requirements are changed. I want the user to give input like this:

    NEW INPUT:

        domains:
            upDown [up,down]
            possibleStates [0-7]
            booleans [true,false]

        locals:
            pc in possibleStates init 0
            flag in upDown init down
            flag1 in upDown init down
            map in array of booleans init false

        shared:
            pcs in possibleStates init 0
            flag in upDown init down
            flag1 in upDown init down
            maps in array of booleans init false

    Again, all the variables can be of two types (of any domain specified):

    1. Local
    2. Shared

    In domains, upDown [up,down], possibleStates [0-7]: upDown, up, down and possibleStates are of type ID (ID is defined in my grammar above), and 0 and 7 are of type INT.

    Can anybody help me convert my current grammar to meet the new specification?


  • Displaying a notification when bluetooth is disconnected - Android

    - by Ryan T
    I am trying to create a program that will display a notification to the user if a Bluetooth device suddenly goes out of range of my Android device. I currently have the following code, but no notification is displayed. I was wondering if it is possible that I shouldn't use ACTION_ACL_DISCONNECTED, because I believe the Bluetooth stack would be expecting packets that state a disconnect is requested. My requirements state that the Bluetooth device will disconnect without warning. Thank you for any assistance!

    BluetoothNotification.java (this is where the notification is created):

        import android.app.Activity;
        import android.app.Notification;
        import android.app.NotificationManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;
        import android.os.Bundle;

        public class BluetoothNotification extends Activity {
            public static final int NOTIFICATION_ID = 1;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                /** Define configuration for our notification */
                int icon = R.drawable.logo;
                CharSequence tickerText = "This is a sample notification";
                long when = System.currentTimeMillis();
                Context context = getApplicationContext();
                CharSequence contentTitle = "Sample notification";
                CharSequence contentText = "This notification has been generated as a result of BT Disconnecting";
                Intent notificationIntent = new Intent(this, BluetoothNotification.class);
                PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0);

                /** Initialize the Notification using the above configuration */
                final Notification notification = new Notification(icon, tickerText, when);
                notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent);

                /** Retrieve reference from NotificationManager */
                String ns = Context.NOTIFICATION_SERVICE;
                final NotificationManager mNotificationManager = (NotificationManager) getSystemService(ns);
                mNotificationManager.notify(NOTIFICATION_ID, notification);

                finish();
            }
        }

    Snippet from onCreate (Controls.java):

        IntentFilter filter1 = new IntentFilter(BluetoothDevice.ACTION_ACL_DISCONNECTED);
        this.registerReceiver(mReceiver, filter1);

    Snippet from Controls.java:

        private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);

                if (BluetoothDevice.ACTION_ACL_DISCONNECTED.equals(action)) {
                    // Device has disconnected
                    NotificationManager nm = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
                }
            }
        };
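
    One thing that stands out in the receiver above: when ACTION_ACL_DISCONNECTED arrives, it only obtains a NotificationManager and never calls notify(), so nothing is ever shown. A sketch of posting the notification directly from onReceive (reusing the icon and text choices from the activity above; treat it as illustrative rather than a verified fix):

        if (BluetoothDevice.ACTION_ACL_DISCONNECTED.equals(action)) {
            NotificationManager nm = (NotificationManager)
                    context.getSystemService(Context.NOTIFICATION_SERVICE);

            Notification n = new Notification(R.drawable.logo,
                    "Bluetooth device disconnected", System.currentTimeMillis());

            PendingIntent pi = PendingIntent.getActivity(context, 0,
                    new Intent(context, BluetoothNotification.class), 0);

            String detail = (device != null)
                    ? device.getName() + " went out of range"
                    : "A device went out of range";
            n.setLatestEventInfo(context, "Bluetooth disconnected", detail, pi);

            nm.notify(BluetoothNotification.NOTIFICATION_ID, n);
        }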


  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful.

    By way of example, let's say I have a class that is structured to fit into an application framework like so:

        public class MyClass {
            private /*ideally-final*/ SomeObject someObject;

            MyClass() {
                someObject = null;
            }

            public void startup() {
                someObject = new SomeObject(/* arguments from environment which are not available until startup is called */);
            }

            public void shutdown() {
                someObject = null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
            }
        }

    I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization.

    The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so:

        public class WoRmObject<T> {
            private T object;

            WoRmObject() {
                object = null;
            }

            public WoRmObject set(T val) {
                object = val;
                return this;
            }

            public T get() {
                return object;
            }
        }

    and then in MyClass, above, do:

        private final WoRmObject<SomeObject> someObject;

        MyClass() {
            someObject = new WoRmObject<SomeObject>();
        }

        public void startup() {
            someObject.set(new SomeObject(/* arguments from environment which are not available until startup is called */));
        }

    Which raises some questions for me:

    1. Is there a better way, or an existing Java object (would have to be available in Java 4)?
    2. Is this thread-safe, provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this.
    3. Given the completely unsynchronized WoRmObject container, is it ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does the JMM always guarantee that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?
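
    For the Java 1.4 constraint in question 1, it is worth noting that the old (pre-JSR-133) memory model gives neither volatile nor an unsynchronized get() any safe-publication guarantee, so the conservative sketch below simply synchronizes both operations and enforces the write-once rule explicitly; after startup the lock is uncontended and usually cheap. (This is an illustration, not a drop-in for the framework above, and it drops generics, which do not exist in 1.4.)

        public class WriteOnceRef {
            private Object value;
            private boolean written;

            public synchronized void set(Object val) {
                if (written) {
                    throw new IllegalStateException("value has already been set");
                }
                value = val;
                written = true;
            }

            public synchronized Object get() {
                return value;   // null until set() has been called
            }
        }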


  • ajaxSubmit and Other Code. Can someone help me determine what this code is doing?

    - by Matt Dawdy
    I've inherited some code that I need to debug. It isn't working at present. My task is to get it to work; no other requirements have been given to me. No, this isn't homework, this is a maintenance-nightmare job. ASP.NET (framework 3.5), C#, jQuery 1.4.2.

    This project makes heavy use of jQuery and AJAX. There is a drop down on a page that, when an item is chosen, is supposed to add that item (it's a user) to an object in the database. To accomplish this, the previous programmer first, on page load, dynamically loads the entire page through AJAX. To do this, he's got 5 divs, and each one is loaded from a jQuery call to a different full page in the website. Somehow, the HTML, BODY and all the other stuff is stripped out and the contents of the div are loaded with the content of the aspx page. Which seems incredibly wrong to me, since it relies on the browser to magically strip out html, head, body, form tags and merge with the existing html head body form tags.

    Also, as the "content" page is returned as a string, the previous programmer has this code running on it before it is appended to the div:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName);
            return responseText;
        }

    When the dropdown itself fires its onchange event, here is the code that gets fired:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit(
                {
                    url: $(form).attr("action"),
                    success: function (responseText) {
                        $(container).html(CleanupResponseText(responseText, form.id));
                        $("form", container).css("margin-top", "0").css("padding-top", "0");
                        //HideLoading(container);
                    }
                }
            );
        }

    This blows up in IE with the message "Microsoft JScript runtime error: Object doesn't support this property or method" - which, I think, means that the $(form).ajaxSubmit method doesn't exist.

    What is this code really trying to do? I am so turned around right now that I think my only option is to scrap everything and start over. But I'd rather not do that unless necessary. Is this code good? Is it working against .NET, and is that why we are having issues?
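
    A point worth checking, offered as a guess rather than a confirmed diagnosis: ajaxSubmit() is not part of jQuery core - it comes from the jQuery Form Plugin (jquery.form.js) - and "Object doesn't support this property or method" is exactly what IE reports when that plugin script is missing or loaded before jQuery. The page (or master page) would need something along these lines, with the paths adjusted to wherever the scripts actually live:

        <!-- order matters: jQuery first, then the form plugin that defines ajaxSubmit -->
        <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="Scripts/jquery.form.js" type="text/javascript"></script>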


  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The Vacancy table has a number of fixed fields across all clients, such as Title, Description and Salary range. There is an EAV design for "custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a ClientText table and the data is stored in a VacancyClientText table with VacancyId, ClientTextId and Value.

    Lastly, there is a many-to-many EAV design for custom tagging/categorising of the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a ClientCategory table listing the types of tag ("Locations", "Skills"), a ClientCategoryItem table listing the valid values for each category (e.g. "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a VacancyClientCategoryItem table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add.

    I am now designing a new system that is very similar to the existing system. However, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch, so I have no legacy issues to deal with.

    For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.

    It is with the tagging/categorising design that I am struggling. If I limit a client to having 5 categories/types of tag, should I create 5 tables listing the possible values (CustomCategoryItems1-5) and then an additional 5 many-to-many tables (VacancyCustomCategoryItem1-5)? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this would result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design.

    Any advice/suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.


  • How do I connect multiple sortable lists to each other in jQuery UI?

    - by Abs
    I'm new to jQuery, and I'm totally struggling with using jQuery UI's sortable. I'm trying to put together a page to facilitate grouping and ordering of items. My page has a list of groups, and each group contains a list of items. I want to allow users to be able to do the following:

    1. Reorder the groups
    2. Reorder the items within the groups
    3. Move the items between the groups

    The first two requirements are no problem; I'm able to sort them just fine. The problem comes in with the third requirement: I just can't connect those lists to each other. Some code might help. Here's the markup:

        <ul id="groupsList" class="groupsList">
            <li id="group1" class="group">Group 1
                <ul id="groupItems1" class="itemsList">
                    <li id="item1-1" class="item">Item 1.1</li>
                    <li id="item1-2" class="item">Item 1.2</li>
                </ul>
            </li>
            <li id="group2" class="group">Group 2
                <ul id="groupItems2" class="itemsList">
                    <li id="item2-1" class="item">Item 2.1</li>
                    <li id="item2-2" class="item">Item 2.2</li>
                </ul>
            </li>
            <li id="group3" class="group">Group 3
                <ul id="groupItems3" class="itemsList">
                    <li id="item3-1" class="item">Item 3.1</li>
                    <li id="item3-2" class="item">Item 3.2</li>
                </ul>
            </li>
        </ul>

    I was able to sort the lists by putting $('#groupsList').sortable({}); and $('.itemsList').sortable({}); in the document ready function. I tried using the connectWith option for sortable to make it work, but I failed spectacularly. What I'd like to do is have every groupItemsX list connected to every groupItemsX list but itself. How should I do that?
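
    For reference, a minimal connectWith sketch against the markup above (the items selectors are my additions; connecting a list "to itself" as well is harmless in practice, since dropping an item back where it came from is a no-op):

        $(function () {
            // the groups themselves stay sortable at the top level
            $('#groupsList').sortable({
                items: '> li.group'
            });

            // every items list accepts drops from every other items list
            $('.itemsList').sortable({
                connectWith: '.itemsList',
                items: '> li.item'
            });
        });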


  • XML serialization options in .NET

    - by Borek
    I'm building a service that returns XML (no SOAP, no Atom, just plain old XML). Say that I have my domain objects already filled with data and just need to transform them to the XML format. What options do I have on .NET?

    Requirements:

    - The transformation is not 1:1. Say that I have an Address property of type Address with nested properties like Line1, City, Postcode etc. This may need to result in XML like <xaddr city="...">Line1, Postcode</xaddr>, i.e. quite different.
    - Some XML elements/attributes are conditional; for example, if a Customer is under 18, the XML needs to contain some additional information.
    - I only need to serialize the objects to XML; the other direction (XML to objects) is not important.
    - Some technologies, i.e. data contracts, use .NET attributes. Other means of configuration (external XML config, buddy classes etc.) would be a plus.

    Here are the options as I see them at the moment. Corrections/additions will be very welcome.

    1. String concatenation - forget it, it was a joke :)
    2. LINQ to XML - complete control but quite a lot of hand-written code; would need a good suite of unit tests.
    3. View engines in ASP.NET MVC (or even Web Forms, theoretically), with the logic in controllers. It's a question how to structure it: I can have a simple rules engine in my controller(s) and one view template per possible output, or have the decision logic directly in the template. Both have upsides and downsides.
    4. XML serialization - I'm not sure about the flexibility here.
    5. Data contracts from WCF - not sure about the flexibility either, plus would they work in a simple ASP.NET MVC app (a non-WCF service)? Are they a super-set of the standard XML serialization now?
    6. If it exists, some XML-to-object mapper. The more I think about it, the more I think I'm looking for something like this, but I couldn't find anything appropriate.

    Any comments / other options?
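
    For a sense of what option 2 looks like against the Address example above, here is a small LINQ to XML sketch (System.Xml.Linq; the Customer/Address property names are assumed, and the under-18 element is invented just to show the conditional case):

        XElement ToXml(Customer c)
        {
            var xml = new XElement("customer",
                new XElement("xaddr",
                    new XAttribute("city", c.Address.City),
                    c.Address.Line1 + ", " + c.Address.Postcode));

            // conditional parts are just ordinary C# around the element tree
            if (c.Age < 18)
            {
                xml.Add(new XElement("guardian", c.GuardianName));
            }

            return xml;
        }

    Because the mapping lives in code, it is the most flexible of the options, at the cost of the hand-written code and tests already noted.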


  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way but get the feeling it's a bad idea from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with:

    - CreditCard
    - Customer
    - Order

    I want it so that:

    - Customers can have credit cards (0-n)
    - Customers have orders (1-n)
    - Orders have one customer (1-1)
    - Orders have one credit card (1-1)
    - Credit cards can have one customer (1-1) (unique IDs, so we can ignore uniqueness of the cc number; husband/wife may share cc instances etc.)

    Basically the last part is where the issue shows up. Sometimes credit cards are declined and the customer wishes to use a different one; this needs to update which card is 'current', but it can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables.

    Possible solutions - both model the same design but translate differently:

    1. Create the circular design, giving references: cc ref to order, customer ref to cc, customer ref to order; or
    2. customer ref to cc, customer ref to order, and create a new table that references all three table IDs, with a unique constraint on the order so that only one cc may be current for that order at any time.

    I am liking the latter option best at this point in time because it seems less circular and more central. (If that even makes sense.)

    My questions are:

    - What, if any, are the pros and cons of each?
    - What are the pitfalls of circular relationships/dependencies?
    - Is this a valid exception to the rule?
    - Is there any reason I should pick the former over the latter?

    Thanks, and let me know if there is anything you need clarified or explained.

    --Update/Edit--

    I have noticed an error in the requirements I stated. Basically I dropped the ball when trying to simplify things for SO. There is another table there for Payments, which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (if you really want to know, even other forms of payment). I'm stating this here because I think the underlying issue is still the same and this only really adds another layer of complexity.

