Search Results

Search found 2484 results on 100 pages for 'maintain'.


  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is that our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process via ERWin. What this means is that they maintain the DBs via ERWin models, and generate from them the alter scripts that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to:

    1. Perform changes in the ERWin model.
    2. Take the alter script generated from ERWin and import it into the appropriate Visual Studio DB project, updating the objects in the DB project.
    3. Check the changed objects in the DB project into TFS.
    4. Let TFS Build execute to generate the alter scripts that will be used to push the changes through our release process.

    My question is: is this solution viable, or are there any other options?

    Read the article

  • To Ajax or Not to Ajax a listing page

    - by kaivalya
    Here I am talking about product listing pages where there are multiple filters that narrow the list of products appearing on the page: product types, categories, price range, etc. I have built such pages both with and without Ajax in the past. What I like about using Ajax on such a page is that when filters are selected, I only update the section that contains the product list. There is no need to refresh the whole page, which could end up re-loading the images in the top bar, banners, etc. and slow things down for the user. The Ajax approach, in my opinion, is more compact and responsive from a user-experience standpoint. The downside of the Ajax route, for me, is that since filter states are not maintained in the URL, I end up maintaining them on the server. This becomes complicated if I want to handle multi-window scenarios, and it is also costly to maintain such state in server memory for each session. Not using Ajax and simply keeping all filter values in the URL and refreshing the page is quite simple, but the luxury of refreshing only the pane that really needs to be refreshed is lost. Lately I am seeing a lot of large-scale e-commerce sites using the non-Ajax approach on their listing pages, and this is making me question one more time whether it might be more efficient to build the non-Ajax listing page, for the long-term ease of maintenance, and sacrifice a little bit of user experience. I am about to start implementing a new listing page for a product, I have the flexibility to go either way, and I would appreciate your inputs.
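    To make the "state in the URL" option concrete, here is a minimal sketch (in Python, purely to illustrate the idea; the parameter names are invented) of round-tripping the filter state through the query string, so the server and any new window can rebuild the page from the URL alone:

        # Encode the selected filters into the URL, and decode them back out.
        # With this approach the URL is the single source of filter state.
        from urllib.parse import urlencode, parse_qs

        filters = {"category": "shoes", "min_price": 20, "max_price": 80}
        query = urlencode(filters)      # 'category=shoes&min_price=20&max_price=80'
        url = "/products?" + query

        # On the next request (or in a second window), recover the state:
        state = parse_qs(url.split("?", 1)[1])
        print(state["category"][0], state["min_price"][0])   # shoes 20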

    Read the article

  • Advice needed on best and most efficient practices with developing google apps application...

    - by Ali
    Hi guys, I'm getting my feet wet with developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration before proceeding any further. My application is such that it would upload documents to Google Documents and store contacts in Google Contacts. It requires that a single order can have a number of uploaded documents associated with it, as well as some associated contacts. My question, however, is what would be the most efficient way to implement this. I could keep key tables for both contacts and documents, which would contain just an ID and a link to the document/contact, or their respective identification IDs on Google. Or I could maintain an exact replica of the information in my own database as well as a link to the contact on Google - but won't that be too redundant? I don't want my application to be really slow: I'm afraid that every time I make a call to Google Docs to retrieve a list of documents or Google Contacts, it would slow my application down - or am I getting worried for no reason? Any advice would be most appreciated.
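    For what it's worth, the first option needs nothing more than a key table. The sketch below (Python/sqlite3, hypothetical schema, just for illustration) keeps only your own order ID plus Google's identifiers and fetches the actual content from Google on demand, so nothing is duplicated locally:

        # Minimal "key table" linking orders to documents stored on Google.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE order_documents (
                          order_id  INTEGER,
                          gdoc_id   TEXT,   -- Google's document id
                          gdoc_link TEXT    -- link returned by the API
                      )""")
        db.execute("INSERT INTO order_documents VALUES (?, ?, ?)",
                   (42, "doc_abc123", "https://docs.google.com/..."))

        # Given an order, look up only the ids and hit Google for the rest.
        for gdoc_id, link in db.execute(
                "SELECT gdoc_id, gdoc_link FROM order_documents WHERE order_id = ?",
                (42,)):
            print(gdoc_id, link)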

    Read the article

  • Google App Engine on Google Apps Domain

    - by Bob Ralian
    I'm having trouble getting my domain pointed to my website hosted with google app engine. Here's the background... take care to separate the concepts of "google apps" (domain hosting, email, etc.) and "google app engine" (website framework). I have a domain that's using Google Apps for Your Domain, let's call it company.com. So my login for my google apps account is [email protected]. I have a different domain that is aliased back to my google apps account, let's call it mycompany.com. It's been successfully aliased and registered with my primary google apps account using the cname method, and has updated mx records. We have a ton of domains, and I only want to use one "google apps" account to maintain them all. Now I have a website I've built using google app engine, and the url is effectively mycompany.appspot.com. I want to get mycompany.com to point to my website that currently resides at mycompany.appspot.com. There's a spot in the google app engine dashboard under application settings where you can add a domain. So I click there and enter mycompany.com and I get an error message saying that domain is not using google apps. If I back up to the page I submitted, there's a note saying I need to register the domain with google apps. So I click the link to do that and enter mycompany.com and I get an error message saying the domain has been registered and is in the process of ownership verification. But that process is already finished. So... what do I do? Does google app engine not support a domain that is only aliased to a primary google apps account? Does mycompany.com need to have its own primary google apps account?

    Read the article

  • How does Lucene index documents?

    - by Mehdi Amrollahi
    Hello, I have read some documents about Lucene, including the one at this link (http://lucene.sourceforge.net/talks/pisa). I don't really understand how Lucene indexes documents, or which algorithms it uses for indexing. At the above link, it says Lucene uses this incremental algorithm for indexing:

        maintain a stack of segment indices
        create index for each incoming document
        push new indexes onto the stack
        let b=10 be the merge factor; M=8

        for (size = 1; size < M; size *= b) {
            if (there are b indexes with size docs on top of the stack) {
                pop them off the stack;
                merge them into a single index;
                push the merged index onto the stack;
            } else {
                break;
            }
        }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree algorithm, or anything like that, for indexing - or does it have a particular algorithm of its own? Thank you for reading my post.
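    To see why the stack-based scheme is efficient, here is a toy simulation (a sketch under the assumption that a segment can be represented by its document count alone; M is taken large here so the loop actually runs):

        # Toy model of the logarithmic merge: every document arrives as a
        # 1-doc index; whenever b same-sized indexes pile up on top of the
        # stack they are merged, so the stack holds O(log_b N) segments.

        def add_document(stack, b=10, M=10**9):
            stack.append(1)
            size = 1
            while size < M:
                if len(stack) >= b and all(s == size for s in stack[-b:]):
                    merged = sum(stack[-b:])
                    del stack[-b:]
                    stack.append(merged)
                    size *= b
                else:
                    break

        stack = []
        for _ in range(12345):
            add_document(stack)
        # 15 segments instead of 12345 one-doc indexes:
        print(stack)  # [10000, 1000, 1000, 100, 100, 100, 10, 10, 10, 10, 1, 1, 1, 1, 1]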

    Read the article

  • Dealing with asynchronous control structures (Fluent Interface?)

    - by Christophe Herreman
    The initialization code of our Flex application does a series of asynchronous calls to check user credentials, load external data, connect to a JMS topic, etc. Depending on the context the application runs in, some of these calls are not executed, or are executed with different parameters. Since all of these calls happen asynchronously, the code controlling them is hard to read, understand, maintain and test. For each call, we need some callback mechanism in which we decide which call to execute next. I was wondering if anyone had experimented with wrapping these calls in executable units and having a Fluent Interface (FI) that would connect and control them. Off the top of my head, the code might look something like:

        var asyncChain:AsyncChain = execute(LoadSystemSettings)
            .execute(LoadAppContext)
            .if(IsAutologin)
                .execute(AutoLogin)
            .else()
                .execute(ShowLoginScreen)
            .etc;
        asyncChain.execute();

    The AsyncChain would be an execution tree, built with the FI (and we could of course also build one without a FI). This might be an interesting idea for environments that run in a single-threaded model like the Flash Player, Silverlight, JavaFX, ... Before I dive into the code to try things out, I was hoping to get some feedback.
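    For feedback purposes, the core of such a chain is small; here is a language-neutral sketch in Python (all names invented) of the fluent part plus the "run the next step only when the previous one signals completion" part:

        # Each step receives a `done` callback and may finish asynchronously;
        # the chain advances only when a step reports completion.
        class AsyncChain:
            def __init__(self):
                self._steps = []

            def execute(self, step):
                self._steps.append(step)
                return self          # returning self gives the fluent interface

            def run(self, i=0):
                if i < len(self._steps):
                    self._steps[i](lambda: self.run(i + 1))

        def load_settings(done):
            print("settings loaded")
            done()                   # in Flex this would fire from a result handler

        def login(done):
            print("logged in")
            done()

        AsyncChain().execute(load_settings).execute(login).run()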

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the ViewModel is starting to get bloated. There are two main issues. One is that, to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view-specific information - for example, a boolean tracking the status of a long-running process in the VM, so the view can disable some of its controls while the long-running process is working. I've read that this issue could be solved with Attached Behaviors; I'll look more into that. In the example MVVM apps you see online, this isn't a big deal, because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM), so all the business logic lives in the VM. I have considered moving each command into a separate unit of work, but I'm not sure of the best way to do this. Which patterns are you using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated; you should break them into many views/VMs". It is certainly not too complicated from a UX perspective - there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.

    Read the article

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about one rather unexpected/undesired behavior of a package we use. Although there is an easy fix (or at least a workaround) on our end, without any apparent side effect, he strongly suggested extending the relevant code by hard patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact, we maintain patches against specific versions of several packages, which are applied automatically on each new build. The main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch. On the other hand, I favor practicality over purity, and my general rule of thumb is "no patch" > "monkey patch" > "hard patch" (the less invasive, the better), at least for anything other than a (critical) bug fix. So I'm wondering if there is a consensus on when it's better to (hard) patch, monkey patch, or just try to work around a third-party package that doesn't do exactly what one would like. Does it mainly have to do with the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), with the given package (size, complexity, maturity, developer responsiveness), with something else - or are there no general rules, so one should decide on a case-by-case basis?

    Read the article

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, at minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts, and will be using Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below.

    - Is there a solution which would be reasonably fast and would not require compilation? This would seem to be the easiest way to maintain platform-independence.
    - If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on the detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python?
    - Does one method lend itself particularly well to multi-processor optimisation?

    (If you're interested, the loop calculates the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

        # calculate_dipole
        # -------------------------
        # calculate_dipole works out the dipole field at a given point
        # within the crystal unit cell
        # ---
        # INPUT
        # mu    = position at which to calculate the dipole field
        # r_i   = array of atomic positions
        # mom_i = corresponding array of magnetic moments
        # ---
        # OUTPUT
        # B = the B-field at this point
        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            # 4pi / mu0 (at the front of the dipole eqn)
            A = 1e-7
            # initialise dipole field
            B = zeros(3, float)
            for i in range(len(relative)):
                # work out the dipole field and add it to the estimate so far
                B += A * (3 * dot(mom_i[i], r_unit[i]) * r_unit[i] - mom_i[i]) \
                     / sqrt(dot(relative[i], relative[i]))**3
            return B
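    One compilation-free option worth sketching (under the assumption that r_i and mom_i are (N, 3) NumPy arrays and mu is a (3,) array, as the code above implies) is to vectorise the loop, so the sum over ions happens inside NumPy's C routines rather than in Python bytecode:

        # Vectorised equivalent of the loop above: no per-ion Python iteration.
        import numpy as np

        def calculate_dipole_vec(mu, r_i, mom_i):
            relative = mu - r_i                               # (N, 3)
            r = np.sqrt(np.sum(relative**2, axis=1))          # (N,) distances
            r_unit = relative / r[:, None]                    # (N, 3) unit vectors
            A = 1e-7                                          # 4pi / mu0 prefactor
            proj = np.sum(mom_i * r_unit, axis=1)             # (N,) dot products
            contrib = A * (3 * proj[:, None] * r_unit - mom_i) / (r**3)[:, None]
            return contrib.sum(axis=0)                        # (3,) B-field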

    Read the article

  • Does anyone else get worn out using Scrum, finishing sprint after sprint?

    - by Simucal
    I'm with a pretty small startup and we started using a form of the Scrum/Agile development cycle. In many ways I enjoy Scrum. We have relatively short sprints (2 weeks) and I like the Burndown Chart to track the team's progress. I also like the Feature Board, so I always know what I should be doing next. It feels good taking a feature's card down from the board, completing it and then putting it in the burn-down pile. However, we are now entering our 18th sprint release cycle and I'm starting to feel a little burnt out. It isn't that I don't like my job or my co-workers; it is just that these sprints are... well, sprints. From start to finish I literally feel like I'm racing against the clock to maintain our development velocity. When we are done with the sprint, we spend one day planning the next sprint's feature set and estimates, and then off we go again. For people who work in a mature Agile/Scrum development process, is this normal? Or are we missing something? Is there normally time in a Scrum environment that is unassigned/untracked, to get some minor things done and to clear your head?

    Read the article

  • How to make explorer-like logic in Obj-C for iPhone

    - by Ekra
    Hi friends, to make the query simple, first an example: if we open Explorer on our Windows desktop, it shows us a tree. As we keep clicking on the tree, it expands and shows the files in it. In the same way, I want to show an explorer that expands as the user clicks on it (I want to work out the business logic; the UI is simple). The information for the explorer - i.e. which folder has which files - comes to me in an XML format. I have 2 options for getting the XML:

    - I can get the whole XML in one query (but I guess this might slow the application down if the XML is quite big), OR
    - I can get the XML for every list separately: first only the root structure, then, if the user clicks on any folder in the root, the list (the files or folders inside that folder) for that particular folder.

    Now the question is: what would be the approach to implement both methods, and which would be the best? Do I need to create some dictionary to maintain the links between the files, i.e. which file is inside what? I am not able to work out how I would link all the files. Any hint or direction would be highly appreciated. Thanks in advance.
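    For the second option, the usual shape of the logic (a rough sketch in Python, with invented names - the same structure carries over to Objective-C) is a node that remembers the server path it came from and fetches its children only on first expansion, so no global dictionary of all files is ever needed:

        class Node:
            def __init__(self, name, path, is_folder, fetch):
                self.name = name
                self.path = path          # identifies this entry on the server
                self.is_folder = is_folder
                self._fetch = fetch       # callable: path -> list of (name, is_folder)
                self._children = None     # None means "not loaded yet"

            def expand(self):
                # Ask the server for this folder's fragment of XML only once,
                # the first time the user clicks on it.
                if self.is_folder and self._children is None:
                    self._children = [Node(n, self.path + "/" + n, f, self._fetch)
                                      for n, f in self._fetch(self.path)]
                return self._children or []

        def fake_fetch(path):
            # stands in for the XML query; a real app would parse the response
            return [("docs", True), ("readme.txt", False)] if path == "" else []

        root = Node("root", "", True, fake_fetch)
        for child in root.expand():
            print(child.name, child.is_folder)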

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question; there are probably numerous ways to solve this. There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in a completely random order: we will assume that there are some complex patterns to the order in which most events arrive, and that the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order in which events have come in in the past. The predictor will then be told what the next event actually was:

        Predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                Predictor.last_event_was(prev_event)
            predicted_event = Predictor.predict_next_event(event)
            prev_event = event   # remember this event for the next round

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer, but the answer can't be infinite, for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and should not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks!

    Update: and what if more than one event can happen simultaneously on each round? Does that change the solution?
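    Since specific code was requested, here is one simple baseline (a sketch, not a definitive answer): a rolling-context Markov predictor that counts how often each event follows each k-length window. Updates are O(1) per event, and memory is bounded by the number of distinct contexts seen, rather than by the length of the stream:

        from collections import defaultdict, deque, Counter

        class MarkovPredictor:
            def __init__(self, k=3):
                self.k = k
                self.context = deque(maxlen=k)      # rolling window of last k events
                self.counts = defaultdict(Counter)  # context -> Counter of next events

            def last_event_was(self, event):
                ctx = tuple(self.context)
                if len(ctx) == self.k:
                    self.counts[ctx][event] += 1    # learn: `event` followed `ctx`
                self.context.append(event)

            def predict_next_event(self):
                ctx = tuple(self.context)
                followers = self.counts.get(ctx)
                if followers:
                    return followers.most_common(1)[0][0]
                return None                         # no data for this context yet

        p = MarkovPredictor(k=2)
        for e in ['event_1', 'event_2', 'event_3'] * 50:
            p.last_event_was(e)
        print(p.predict_next_event())   # after event_2, event_3 -> 'event_1'

    On the Update: one cheap trick is to treat the set of simultaneous events as a single composite symbol (e.g. a frozenset), which keeps the same machinery working.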

    Read the article

  • Custom Database integration with MOSS 2007

    - by Bob
    Hopefully someone has been down this road before and can offer some sound advice as far as which direction I should take. I am currently involved in a project in which we will be utilizing a custom database to store data extracted from Excel files, based on pre-established templates (to maintain consistency). We currently have a process (written in C#/.NET 2008) that can extract the necessary data from the spreadsheets and import it into our custom database. What I am primarily interested in is figuring out the best method for integrating that process with our portal. What I would like to do is let SharePoint keep track of the metadata about the spreadsheet itself, and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets from SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried and true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.

    Read the article

  • Beginner problems with references to arrays in python 3.1.1

    - by Protean
    As part of the last assignment in a beginner Python programming class, I have been given a traveling salesman problem. I settled on a recursive function to find each permutation and the sum of the distances between the destinations; however, I am having a lot of problems with references. Arrays in different instances of the Permute and Main functions of TSP seem to be pointing to the same reference.

        from math import sqrt

        class TSP:
            def __init__(self):
                self.CartisianCoordinates = [['A', [1, 1]], ['B', [2, 2]], ['C', [2, 1]],
                                             ['D', [1, 2]], ['E', [3, 3]]]
                self.Array = []
                self.Max = 0
                self.StoredList = ['', 0]

            def Distance(self, i1, i2):
                x1 = self.CartisianCoordinates[i1][1][0]
                y1 = self.CartisianCoordinates[i1][1][1]
                x2 = self.CartisianCoordinates[i2][1][0]
                y2 = self.CartisianCoordinates[i2][1][1]
                return sqrt(pow((x2 - x1), 2) + pow((y2 - y1), 2))

            def Evaluate(self):
                temparray = []
                Data = []
                for i in range(len(self.CartisianCoordinates)):
                    Data.append([])
                for i1 in range(len(self.CartisianCoordinates)):
                    for i2 in range(len(self.CartisianCoordinates)):
                        if i1 != i2:
                            temparray.append(self.Distance(i1, i2))
                        else:
                            temparray.append('X')
                    Data[i1] = temparray
                    temparray = []
                self.Array = Data
                self.Max = len(Data)

            def Permute(self, varray, index, vcarry, mcarry):  # Problem Class
                array = varray[:]
                carry = vcarry[:]
                for i in range(self.Max):
                    print('ARRAY:', array)
                    print(index, i, carry, array[index][i])
                    if array[index][i] != 'X':
                        carry[0] += self.CartisianCoordinates[i][0]
                        carry[1] += array[index][i]
                        if len(carry) != self.Max:
                            temparray = array[:]
                            for j in range(self.Max):
                                temparray[j][i] = 'X'
                            index = i
                            mcarry += self.Permute(temparray, index, carry, mcarry)
                        else:
                            return mcarry
                print('pass', mcarry)
                return mcarry

            def Main(self):
                out = []
                self.Evaluate()
                for i in range(self.Max):
                    # array appears to maintain the same reference after each
                    # copy, resulting in an incorrect array being passed to
                    # Permute after the first iteration.
                    array = self.Array[:]
                    print(self.Array[:])
                    for j in range(self.Max):
                        array[j][i] = 'X'
                    print('I:', i, array)
                    out.append(self.Permute(array, i,
                                            [str(self.CartisianCoordinates[i][0]), 0], []))
                return out

        SalesPerson = TSP()
        print(SalesPerson.Main())

    It would be greatly appreciated if you could provide me with help in solving the reference problems I am having. Thank you.
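    The underlying issue, for what it's worth, is that slicing copies only the outer list while the inner rows stay shared. A short demonstration, independent of the code above:

        # Shallow copy (slice) shares the inner rows; deepcopy does not.
        import copy

        grid = [[1, 2], [3, 4]]

        shallow = grid[:]           # new outer list, same inner row objects
        shallow[0][0] = 'X'
        print(grid[0][0])           # 'X' -- the original was modified too

        grid = [[1, 2], [3, 4]]
        deep = copy.deepcopy(grid)  # inner rows are copied as well
        deep[0][0] = 'X'
        print(grid[0][0])           # 1 -- the original is untouched

    In the code above, replacing the slice copies (array = self.Array[:] in Main, and the copies inside Permute) with copy.deepcopy(...) should give each iteration its own rows.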

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring), and I need help on the best database design practices for storing job invoices and invoice lines, with a strong audit trail of any changes made to each invoice.

    Current schema:

        Invoices Table
            InvoiceId (int)            // Primary key
            JobId (int)
            StatusId (tinyint)         // Pending, Paid or Deleted
            UserId (int)               // auditing user
            Reference (nvarchar(256))  // unique natural string key with invoice number
            Date (datetime)
            Comments (nvarchar(MAX))

        InvoiceLines Table
            LineId (int)               // Primary key
            InvoiceId (int)            // related to Invoices above
            Quantity (decimal(9,4))
            Title (nvarchar(512))
            Comment (nvarchar(512))
            UnitPrice (smallmoney)

    Revision schema:

        InvoiceRevisions Table
            RevisionId (int)           // Primary key
            InvoiceId (int)
            JobId (int)
            StatusId (tinyint)         // Pending, Paid or Deleted
            UserId (int)               // auditing user
            Reference (nvarchar(256))  // unique natural string key with invoice number
            Date (datetime)
            Total (smallmoney)

    Schema design considerations:

    1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (e.g. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table?

    2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes, along with the invoice total and the auditing user, in an invoice revision table (see InvoiceRevisions above), but keeping an invoice line revision table feels hard to maintain. Thoughts?

    3. Tax: how should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?

    Read the article

  • How to Create Deterministic Guids

    - by desigeek
    In our application we are creating XML files with an attribute that has a Guid value. This value needs to be consistent between file upgrades: even if everything else in the file changes, the Guid value for the attribute should remain the same. One obvious solution was to create a static dictionary mapping filenames to the Guids to be used for them; then, whenever we generate the file, we look the filename up in the dictionary and use the corresponding Guid. But this is not feasible because we might scale to hundreds of files, and we didn't want to maintain a big list of Guids. So another approach was to derive the Guid from the path of the file. Since our file paths and application directory structure are unique, the Guid should be unique for that path, and each time we run an upgrade the file gets the same Guid based on its path. I found one cool way to generate such 'deterministic Guids' (thanks, Elton Stoneman). It basically does this:

        private Guid GetDeterministicGuid(string input)
        {
            // use MD5 hash to get a 16-byte hash of the string:
            MD5CryptoServiceProvider provider = new MD5CryptoServiceProvider();
            byte[] inputBytes = Encoding.Default.GetBytes(input);
            byte[] hashBytes = provider.ComputeHash(inputBytes);

            // generate a guid from the hash:
            Guid hashGuid = new Guid(hashBytes);
            return hashGuid;
        }

    So given a string, the Guid will always be the same. Are there any other approaches or recommended ways of doing this? What are the pros and cons of that method?
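    Worth noting: hashing a name into a GUID is standardised as name-based UUIDs (versions 3 and 5 in RFC 4122), so any conforming library reproduces the same value for the same path. A minimal illustration in Python (the path below is hypothetical):

        # uuid5 hashes a namespace plus a name with SHA-1 into a stable UUID;
        # the same inputs always yield the same id, on any machine.
        import uuid

        path = r"C:\app\configs\orders.xml"          # hypothetical file path
        print(uuid.uuid5(uuid.NAMESPACE_URL, path))  # same value on every run
        print(uuid.uuid5(uuid.NAMESPACE_URL, path))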

    Read the article

  • Validation without ServiceLocator

    - by Dmitriy Nagirnyak
    Hi, I keep coming back, again and again, to thinking about the best way to perform validation on POCO objects that need access to some context (ISession in NH, IRepository, for example). The only option I can still see is to use a Service Locator, so my validation would look like:

        public class User : ICanValidate
        {
            public User() { } // We need this constructor (so no context is known)

            public virtual string Username { get; set; }

            public IEnumerable<ValidationError> Validate()
            {
                if (ServiceLocator.GetService<IUserRepository>()
                        .FindUserByUsername(Username) != null)
                    yield return new ValidationError("Username", "User already exists.");
            }
        }

    I already use Inversion of Control and Dependency Injection, and I really don't like the ServiceLocator, for a number of reasons:

    - Harder to maintain implicit dependencies.
    - Harder to test the code.
    - Potential threading issues.
    - Explicit dependency only on the ServiceLocator.
    - The code becomes harder to understand.
    - Need to register the ServiceLocator interfaces during testing.

    But on the other side, with plain POCO objects, I do not see any other way of performing validation like the above without the ServiceLocator, using only IoC/DI. So the question is: is there any way to use DI/IoC for the situation described above? Thanks, Dmitriy.

    Read the article

  • Shopping list for developing Windows app on Mac

    - by raj.tiwari
    Folks, I need to maintain a C#/.NET desktop application, so I need to set myself up with Windows (7?) and Visual Studio. My current development machine is a MacBook Pro and I would like to continue using it. Overall, I am considering the following recipe:

    - Install VMWare Fusion or Parallels or VirtualBox for running the Windows OS
    - Buy a version of Windows to develop on
    - Buy the Windows developer tools

    Having been in the open source universe all this time, I am utterly unfamiliar with the options/packages in the Windows world. I could use some help on the following:

    - Does the recipe above look fine, or do I need to change something?
    - What is a good VM environment to buy/use? VirtualBox is free, but Parallels/VMWare promise Windows apps that blend in with my Mac windows. I could use some help on this topic.
    - Does MSFT sell a package deal which has a bare-bones Windows 7 and the necessary dev tools, or do I need to buy the OS and dev tools separately?
    - Since I only need Windows to churn out this C# desktop application, what OS version and flavor of Visual Studio should I get?

    Thanks in advance for any pointers. -Raj

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps applies some other effects. Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five-second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs. I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while the app is busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps still have to work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to search for information on how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck, I'm not even 100% sure it's the DWM that's responsible, but it seems likely. Any potential leads? (A photo of the problem shows the effect; screen shots won't capture it. Note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window.)

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q "
            + "\"" + "USE [" + database + "];" + txtQuery.Text + "\""

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed, and quite possibly slower. I believe I'm doing everything correctly with the SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder =
                    new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the sqlcmd command line utility uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave

    Read the article

  • PyGTK, Glade, Changing the window view and threads

    - by Gaunt Face
    Heya everyone, forgive me if this seems like a stupid question; so far nowhere on the internet can I find someone offering a solution to this, and I just wanted some feedback from someone with more experience than myself (I've only been using Python, PyGTK and Glade for 2 days now). I have a UI window displaying and updating with messages from a thread that is handling a bluetooth connection. This is fine, and I have the application closing and running quite reliably. The problem is: after a bluetooth connection is made, I wish to keep the bluetooth thread alive (i.e. keep the connection going) but completely change the UI of the main window. Now the impression I am getting from PyGTK applications made from Glade is that the easiest thing to do is just open a new window. Is this really the best option? Can I cut the tree of widgets off at the root, keeping the window widget but adding on a new set of widgets from a separate glade file? If opening a new window is the best option, am I right in assuming that the bluetooth thread can be kept alive during this transition, providing I update any callbacks? Any help or pointers would be great. Cheers, Matt
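    On the "cut the tree off at the root" option: a gtk.Window holds a single child widget, so one rough sketch (names invented, untested against a real Glade file) is to remove that child and attach the root widget from the second glade file; the window object - and any background thread - survives untouched:

        import gtk

        def swap_view(window, new_root_widget):
            old = window.get_child()
            if old is not None:
                window.remove(old)       # drop the old widget tree
            window.add(new_root_widget)  # attach the tree from the other glade file
            new_root_widget.show_all()

        # e.g. on the GUI thread, once the bluetooth connection is up:
        # swap_view(main_window, gtk.glade.XML("connected.glade").get_widget("root"))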

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which do maintain historical records) and are of the Key-Value-Parameter type. My scenario is fine if I get records in the EDW like this:

        Key1  Key2  Code  Value  EffectiveDate         EndDate     CurrentFlag
        100   555   01    AAA    2010-01-01 11.00.00   9999-12-31  Y
        100   555   02    BBB    2010-01-01 11.00.00   9999-12-31  Y

    This needs to be loaded into the DM by pivoting it, as the key1 and key2 combination makes the natural key for the DM:

        SK  NK       01   02   EffectiveDate         EndDate     CurrentFlag
        1   100-555  AAA  BBB  2010-01-01 11.00.00   9999-12-31  Y

    My SSIS package does all this pivoting well: it looks up the incoming NK in the DIM; if it is new, it inserts; otherwise, with a further lookup on effective date, it determines whether the incoming row for the same natural key has any change in an attribute. If so, it updates the current record by setting its end date and inserts a new one with the new attribute value, pulling the most recent record's values for the other attributes. My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup, which is on the natural key, lets both records pass and tries to insert both, where it fails. If I take distinct records on the NK, the second is not picked up and I need to run the package again. So my question is: how can I configure the lookup, or what alternative is there, to handle the scenario where the same NK comes twice in a single extract? I should be able to insert the first record if it does not exist in the Dim table, and the second one should then be treated as an update against the record just inserted. I'm not sure this makes sense as I'm trying to explain it; I will attach a screenshot once I'm back at my desk (on Monday). Thanks

    Read the article

  • Maintaining session information between 2 asp.net calls programmatically?

    - by Santhosh
    Hi, I'm not sure if I'll be clear enough in my explanation for you to understand, but I'll try. Here's my problem: we have an external site which the users in our company connect to with their corresponding usernames and passwords. The external site is an ASP.NET website. We want to integrate this website into our intranet portal so that the users don't have to enter their username/password to log in to the website. Since the target website has no provision for SSO, we are simulating the POST request to log in. So far so good. We are now required to perform an action, after the initial login is done, on another page. We can simulate the corresponding POST request as well. But the problem is that since we are not maintaining any session information from our initial POST request, it always redirects to the login screen. Is there any way to maintain ASP.NET session information between multiple calls done programmatically? Can we create an ASP.NET session id cookie programmatically and then pass it along with our initial request? Or is this not possible at all? Any comments are appreciated. Thanks for your help. Regards.
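    The general mechanism being asked about is cookie persistence: capture the Set-Cookie headers (including the ASP.NET session id) from the login response and replay them on every later request; in .NET this is what sharing one CookieContainer across HttpWebRequests does. A compact illustration of the same idea in Python, using the requests library (the URLs and form fields are made up):

        import requests

        s = requests.Session()   # one Session = one cookie jar shared by all calls

        # First POST: the login. The server's ASP.NET_SessionId (and any auth
        # cookie) is stored in s.cookies automatically.
        s.post("https://example.com/login.aspx",
               data={"username": "jdoe", "password": "secret"})

        # Second POST: same Session, so the stored cookies ride along and the
        # server sees an in-session request instead of redirecting to login.
        s.post("https://example.com/action.aspx", data={"do": "something"})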

    Read the article

  • Good resources for building web-app in Tapestry

    - by Rich
    Hi, I'm currently researching Tapestry for my company and trying to decide whether we can port our pre-existing proprietary web applications to something better. Currently we are running Tomcat and using JSP for our front end, backed by our own framework that eventually uses JDBC to connect to an Oracle database. I've gone through the Tapestry tutorial, which was really neat and got me interested, but now I'm faced with what seems to be a common issue: documentation. There are a lot of things I'd need to be sure I could accomplish with Tapestry before I'd be ready to commit fully to it. Does anyone have any good resources, be it a book or web article or anything else, that go into more detail than the Tapestry tutorial? I am also considering integrating with Hibernate, and have read a little bit about Spring too. I'm still having a hard time understanding how Spring would be more useful than cumbersome in tandem with Tapestry, as they seem to have a lot of overlapping features. An example I read seemed to use Spring to interface with Hibernate, and then Tapestry to interface with Spring, but I was under the impression that Tapestry integrates to the same degree with Hibernate directly. The resource I'm speaking of is http://wiki.apache.org/tapestry/Tapstry5First_project_with_Tapestry5,_Spring_and_Hibernate . I was interested because I hadn't found information anywhere else on how to maintain user levels and sessions through a Tapestry application, but I wasn't exactly impressed by the need to use Spring in the example.

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to reintegrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people regularly update the branch from the trunk, which means that the final merge back is reflective. In our use case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability, we don't want to merge up from the trunk, but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to:

    1. Create the branch.
    2. Make regular changes to the branch (and trunk).
    3. Merge back to trunk regularly (daily, perhaps).

    Since we will never merge up from trunk, I don't think we need to worry about the problems that reintegrating is designed to fix. Can anyone see a problem with this approach? Cheers, Matt

    Read the article
