Search Results

Search found 25148 results on 1006 pages for 'distributed source contr'.

Page 408/1006

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have a development, a staging and a production environment. When I finish in development I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, given that the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now when it comes time to move to production I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script that I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?

    Read the article

  • Question about memory allocation when initializing char arrays in C/C++.

    - by Carlos Nunez
    Before anything, I apologize if this question has been asked before. I am programming a simple packet sniffer for a class project. For a little while, I ran into the issue where the source and destination of a packet appeared to be the same. For example, the source and destination of an Ethernet frame would be the same MAC address all of the time. I custom-made ether_ntoa(char *) because Windows does not seem to have ethernet.h like Linux does. Code snippet is below:

        char *ether_ntoa(u_char etheraddr[ETHER_ADDR_LEN])
        {
            int i, j;
            char eout[32];

            for(i = 0, j = 0; i < 5; i++) {
                eout[j++] = etheraddr[i] >> 4;
                eout[j++] = etheraddr[i] & 0xF;
                eout[j++] = ':';
            }
            eout[j++] = etheraddr[i] >> 4;
            eout[j++] = etheraddr[i] & 0xF;
            eout[j++] = '\0';

            for(i = 0; i < 17; i++) {
                if(eout[i] < 10)
                    eout[i] += 0x30;
                else if(eout[i] < 16)
                    eout[i] += 0x57;
            }

            return(eout);
        }

    I solved the problem by using malloc() to have the compiler assign memory (i.e. instead of char eout[32], I used char *eout; eout = (char *) malloc(32);). However, I thought that the compiler assigned different memory locations when one sized a char array at compile time. Is this incorrect? Thanks! Carlos Nunez

    Read the article

  • Cocoa framework development: sharing between projects

    - by e.James
    I am currently developing a handful of similar Cocoa desktop apps. In an effort to share code between them, I have identified a set of core classes and functions that can be common across all of these applications. I would like to bundle this common code into a framework which all of my current applications (and any future ones) can link against. Now, here's the hard part: I'm going to be developing this framework as I go, so I need each of my desktop apps to have a reference to it, but I want to be able to edit the framework source code from within each of the app projects and have the framework automatically rebuilt as required. For example, let's say I have the Xcode project for DesktopAppNumberOne open, and I decide that one of my framework classes needs to be changed. I would like to:

    1. Open and edit the source file for that framework class without having to open the framework project in Xcode.
    2. Hit "build" on DesktopAppNumberOne, and see the framework rebuilt first (because one of its sources has changed), then see parts of DesktopAppNumberOne rebuilt (because one of the frameworks it links against has changed).

    I can see how to do this with only one app and one framework, but I'm having trouble figuring out how to do it with multiple apps that share a single framework. Has anyone had success with this approach? Am I perhaps going about this the wrong way? Any help would be appreciated.

    Read the article

  • How To Go About Updating Old C Code

    - by Ben313
    Hello: I have been working on some 10-year-old C code at my job this week, and after implementing a few changes, I went to the boss and asked if he needed anything else done. That's when he dropped the bomb. My next task was to go through the 7000 or so lines and understand more of the code, AND to modularize the code somewhat. I asked him how he would like the source code modularized, and he said to start putting the old C code into C++ classes. Being a good worker, I nodded my head yes, and went back to my desk, where I sit now, wondering how in the world to take this code and "modularize" it. It's already in 20 source files, each with its own purpose and function. In addition, there are three "main" structs. Each of these structures has 30-plus fields, many of them being other, smaller structs. It's a complete mess to try to understand, but almost every single function in the program is passed a pointer to one of the structs, and uses the struct heavily. Is there any clean way for me to shoehorn this into classes? I am resolved to do it if it can be done, I just have no idea how to begin.

    Read the article

  • WPF: How to bind and update display with DataContext

    - by Am
    I'm trying to do the following thing: I have a TabControl with several tabs. Each TabControlItem.Content points to PersonDetails, which is a UserControl. Each BookDetails has a dependency property called IsEditMode. I want a control outside of the TabControl, named ToggleEditButton, to be updated whenever the selected tab changes. I thought I could do this by changing the ToggleEditButton data context, but it doesn't seem to work (I'm new to WPF, so I might be way off). The code changing the data context:

        private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (e.Source is TabControl)
            {
                if (e.Source.Equals(tabControl1))
                {
                    if (tabControl1.SelectedItem is CloseableTabItem)
                    {
                        var tabItem = tabControl1.SelectedItem as CloseableTabItem;
                        RibbonBook.DataContext = tabItem.Content as BookDetails;
                        ribbonBar.SelectedTabItem = RibbonBook;
                    }
                }
            }
        }

    The DependencyProperty under BookDetails:

        public static readonly DependencyProperty IsEditModeProperty =
            DependencyProperty.Register("IsEditMode", typeof(bool), typeof(BookDetails),
                                        new PropertyMetadata(true));

        public bool IsEditMode
        {
            get { return (bool)GetValue(IsEditModeProperty); }
            set
            {
                SetValue(IsEditModeProperty, value);
                SetValue(IsViewModeProperty, !value);
            }
        }

    And the relevant XAML:

        <odc:RibbonTabItem Title="Book" Name="RibbonBook">
          <odc:RibbonGroup Title="Details" Image="img/books2.png" IsDialogLauncherVisible="False">
            <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
                                    odc:RibbonBar.MinSize="Medium"
                                    SmallImage="img/edit_16x16.png" LargeImage="img/edit_32x32.png"
                                    Click="Book_EditDetails"
                                    IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}"/>
            ...

    There are two things I want to accomplish: having the button reflect the IsEditMode for the visible tab, and having the button change the property value with no code-behind (if possible). Any help would be greatly appreciated.

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat in order to suit Eclipse better):

        -product
         |
         |-parent
         |-core
         |-opt
         |-all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate sources of the entire aggregate project. This is also true for the javadoc plugin, which likewise aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module and just using the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or to aggregate javadoc in the 'all' project? Or should I just keep both? Thanks

    Read the article

  • ManyToOne annotation fails with Hibernate 4.1: MappingException

    - by barelas
    Using Hibernate 4.1.1.Final. When I try to add @ManyToOne, schema creation fails with:

        org.hibernate.MappingException: Could not instantiate persister org.hibernate.persister.entity.SingleTableEntityPersister

    User.java:

        @Entity
        public class User {
            @Id
            private int id;
            public int getId() {return id;}
            public void setId(int id) {this.id = id;}

            @ManyToOne
            Department department;
            public Department getDepartment() {return department;}
            public void setDepartment(Department department) {this.department = department;}
        }

    Department.java:

        @Entity
        public class Department {
            @Id
            private int departmentNumber;
            public int getDepartmentNumber() {return departmentNumber;}
            public void setDepartmentNumber(int departmentNumber) {this.departmentNumber = departmentNumber;}
        }

    hibernate.properties:

        hibernate.connection.driver_class=com.mysql.jdbc.Driver
        hibernate.connection.url=jdbc:mysql://localhost:3306/dbname
        hibernate.connection.username=user
        hibernate.connection.password=pass
        hibernate.connection.pool_size=5
        hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
        hibernate.hbm2ddl.auto=create

    Init (which throws the exception):

        ServiceRegistry serviceRegistry = new ServiceRegistryBuilder().buildServiceRegistry();
        sessionFactory = new MetadataSources(serviceRegistry)
                .addAnnotatedClass(Department.class)
                .addAnnotatedClass(User.class)
                .buildMetadata()
                .buildSessionFactory();

    Exception thrown at init:

        org.hibernate.MappingException: Could not instantiate persister org.hibernate.persister.entity.SingleTableEntityPersister
            at org.hibernate.persister.internal.PersisterFactoryImpl.create(PersisterFactoryImpl.java:174)
            at org.hibernate.persister.internal.PersisterFactoryImpl.createEntityPersister(PersisterFactoryImpl.java:148)
            at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:820)
            at org.hibernate.metamodel.source.internal.SessionFactoryBuilderImpl.buildSessionFactory(SessionFactoryBuilderImpl.java:65)
            at org.hibernate.metamodel.source.internal.MetadataImpl.buildSessionFactory(MetadataImpl.java:340)

    I have tried adding some other annotations, but shouldn't the defaults work and create the tables and foreign key? If I remove the department from User, tables get generated fine. Thanks in advance!
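
    One thing worth checking: in Hibernate 4.1 the org.hibernate.metamodel / MetadataSources bootstrap was still a work in progress and did not yet cover everything the classic path handles, so the Configuration-based bootstrap is a useful cross-check. A minimal sketch, assuming the same two entities and the hibernate.properties above:

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;
        import org.hibernate.service.ServiceRegistry;
        import org.hibernate.service.ServiceRegistryBuilder;

        public class HibernateBootstrap {
            public static SessionFactory build() {
                // Settings are picked up from hibernate.properties on the classpath.
                Configuration cfg = new Configuration()
                        .addAnnotatedClass(Department.class)
                        .addAnnotatedClass(User.class);

                ServiceRegistry registry = new ServiceRegistryBuilder()
                        .applySettings(cfg.getProperties())
                        .buildServiceRegistry();

                return cfg.buildSessionFactory(registry);
            }
        }

    If the tables and foreign key come out correctly through this path, the entity mappings themselves are fine and the issue lies with the experimental metamodel.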

    Read the article

  • Is ActiveMQ unreliable?

    - by user122991
    Hello, we have been using ActiveMQ 5.2 in our distributed enterprise application for about 3 months. During that time, we have experienced debilitating failures at least twice weekly. In particular, we see:

    1) The topic publisher has its connection arbitrarily closed and experiences EOF on attempting to publish. Note well that this issue is not a function of some timeout; it does not correlate reliably with any inactivity.
    2) Queue listeners never receive a message; the message simply sits on the queue.

    2) is much rarer (hardly ever) than 1). In both cases, the failures are highly intermittent -- they cannot be reliably reproduced through any testing usage pattern. Also, there are no errors or warnings in the AMQ logs. Have others experienced similar problems? Is there an opinion that some other JMS provider is more reliable? thanks, Joe

    Read the article

  • Building a minimal plugin architecture in Python.

    - by dF
    I have an application, written in Python, which is used by a fairly technical audience (scientists). I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture. I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.) Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration?

    Read the article

  • How do I iterate over a collection that is in an object passed as parameter in a jasper report?

    - by spderosso
    Hi, I have an object A that has, as an instance variable, a collection of object Bs. Example:

        public class A {
            String name;
            List<B> myList;
            ...
            public List<B> getMyList() {
                return myList;
            }
            ...
        }

    I want this object to be the only source of information the Jasper report gets, since all the information the report needs is in A. I am currently doing something like:

        A myObjectA = new A(...);
        InputStream reportFile = MyPage.this.getClass().getResourceAsStream("test.jrxml");
        HashMap<String, Object> parameters = new HashMap<String, Object>();
        parameters.put("objectA", myObjectA);
        ...
        JasperReport report = JasperCompileManager.compileReport(reportFile);
        JasperPrint print = JasperFillManager.fillReport(report, parameters,
                new JRBeanCollectionDataSource(myObjectA.getMyList()));
        return JasperExportManager.exportReportToPdf(print);

    thereby passing "two" parameters: the objectA as a concrete parameter, and the collection of object Bs that is in A as a bean data source. How do I iterate over the Bs in A by passing only A? Thanks!
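
    One common pattern, sketched below under assumptions about the report layout: pass A itself as a single-bean data source so the master report exposes A's properties (name, myList) as fields, and let a subreport or list component iterate over the myList field with a data source expression such as new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{myList}).

        import java.io.InputStream;
        import java.util.Collections;
        import java.util.HashMap;

        import net.sf.jasperreports.engine.JRDataSource;
        import net.sf.jasperreports.engine.JasperCompileManager;
        import net.sf.jasperreports.engine.JasperExportManager;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;
        import net.sf.jasperreports.engine.JasperReport;
        import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

        public class SingleBeanReport {
            public byte[] run(A myObjectA, InputStream reportFile) throws Exception {
                // Wrap the one A instance in a singleton collection: the master
                // report then sees A's bean properties as report fields.
                JRDataSource masterSource =
                        new JRBeanCollectionDataSource(Collections.singletonList(myObjectA));

                JasperReport report = JasperCompileManager.compileReport(reportFile);
                JasperPrint print = JasperFillManager.fillReport(
                        report, new HashMap<String, Object>(), masterSource);
                return JasperExportManager.exportReportToPdf(print);
            }
        }

    The JRXML side (a field named myList of type java.util.List feeding the subreport or list component) is an assumption about test.jrxml, not something shown in the question.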

    Read the article

  • getting CS1502 compiler error on dev environment but not production.

    - by nw
    When I try to run my ASP.NET app from my development environment I get the following error message:

        Compiler Error Message: CS1502: The best overloaded method match for 'mmars.Printing.printFunctions.SetPrintSummaryProperties(mmars.contextInfo, ref mmars.Printing.printObjSummary)' has some invalid arguments.

    When I publish and run on our production server I don't get this error. It seems to compile fine when I build from the build menu (in fact, if I change the second argument of the SetPrintSummaryProperties call below, I get a compiler error in Visual Studio), but now I've suddenly started getting this error message at runtime. So another question I have, in addition to getting rid of the error, is: why is the .NET development server even trying to do JIT compilation on my project if it is already compiled into a DLL?

        Printing.printObjSummary myPrintObj = new Printing.printObjSummary();
        Printing.printFunctions.SetPrintSummaryProperties(ci, ref myPrintObj);
        printObjects.Add(myPrintObj);

    This seems to have just suddenly appeared from nowhere today and it's extremely frustrating. Also, though there are no warnings at compile time, when I get redirected to the page with that first compilation error there are many warnings like the following:

        Warning: CS0436: The type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs' conflicts with the imported type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\assembly\dl3\7179c19a\345f948c_ece7ca01\mmars.DLL'. Using the type defined in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs'.

    What's the deal with that? Is the webserver complaining about name conflicts between the source file and the DLL resulting from that source file?

    Read the article

  • SharpDevelop WIX project: MSBuild Configurations

    - by chezy525
    Using SharpDevelop, I wrote a Windows service with a WiX setup project to install/auto-start it. For testing purposes, I've done a number of things I don't want to do in the release version (i.e. add an uninstall shortcut to the desktop). So my question really boils down to this: how do you handle build configurations within a WiX project? I think I've solved most of my problems after I found this question: Passing build parameters to .wxs file to dynamically build wix installers. And thus far I've done the following:

    1. Added a property that checks the Configuration variable:

        <Product>
          ...
          <Property Id="DEBUG">$(var.Configuration) == 'Debug'</Property>
          ...

    2. Separated all of the debug files into unique components and set them up as a separate feature with a condition checking the DEBUG property:

        <Product>
          ...
          <Feature>
            ...
            <Feature Id="DebugFiles" Level="1">
              <ComponentRef Id="UninstallShortcutComponent" />
              <Condition Level="0">DEBUG</Condition>
            </Feature>
          ...

    3. Then, finally, pointed to the correct file based on the configuration, using the Configuration variable:

        <Directory>
          ...
          <Component>
            <File Source="..\mainProject\bin\$(var.Configuration)\main.exe" />
          </Component>
          ...

    So, now my question is simplified to how to handle files that may not exist under certain build configurations (like .pdb files). Using all of the above (including pointing the file source to the ...\bin\Release\*.pdb, which I know isn't expected to exist) I get an LGHT0103 compiler error: it can't find the file.

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example "select all entries between times T1 and T2 where the noiselevel is smaller than X". On the other hand, I would like to use a NoSQL/key-value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters in the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder: is that the case with all NoSQL databases? Can other NoSQL systems do range queries over two different properties? How about, for example, MongoDB? I've looked in the documentation, but the only thing I've found was the following snippet:

        Note that any of the operators on this page can be combined in the same query document. For example, to find all documents where j is not equal to 3 and k is greater than 10, you'd query like so:

        db.things.find({j: {$ne: 3}, k: {$gt: 10}});

    So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
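
    For what it's worth, MongoDB does accept range conditions on two different fields in the same query document (whether a single index can serve both ranges efficiently is a separate question). A sketch with the MongoDB Java driver, where the database, collection and field names are just assumptions matching the example above:

        import com.mongodb.BasicDBObject;
        import com.mongodb.DB;
        import com.mongodb.DBCollection;
        import com.mongodb.DBCursor;
        import com.mongodb.DBObject;
        import com.mongodb.Mongo;

        public class RangeQuery {
            public static void main(String[] args) throws Exception {
                Mongo mongo = new Mongo("localhost", 27017);
                DB db = mongo.getDB("sensors");                         // assumed database
                DBCollection readings = db.getCollection("readings");   // assumed collection

                long t1 = 1000L, t2 = 2000L;   // example time bounds
                int maxNoise = 50;             // example noise threshold

                // Two inequality filters on two different properties in one query:
                // time between t1 and t2 AND noiselevel below maxNoise.
                DBObject query = new BasicDBObject("time",
                        new BasicDBObject("$gte", t1).append("$lte", t2))
                        .append("noiselevel", new BasicDBObject("$lt", maxNoise));

                DBCursor cursor = readings.find(query);
                while (cursor.hasNext()) {
                    System.out.println(cursor.next());
                }
                mongo.close();
            }
        }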

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html

    In particular, I don't understand the author's stance on bundling dependencies with your project. These are:

        == Bundling ==
        Your source only comes with other code projects that it depends on [ +20 points of FAIL ]

    Why is this a problem (especially given the last point)?

        If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]

    Doesn't this necessarily have to be the case for software built against 3rd-party libs? Your code needs that other code to be compiled into its library before the linker can work?

        If you have modified those other bundled code bits [ +40 points of FAIL ]

    If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows env), if your code base is dependent on a particular version of a lib (that you also need to custom compile for your project), wouldn't it be easier to give the user the code yourself (because in this case, it is unlikely that the user will already have the correct version installed)?

    So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • binding a command inside a listbox item to a property on the viewmodel parent

    - by gideon
    I've been working on this for about an hour and looked at all related SO questions. My problem is very simple: I have HomePageViewModel:

        HomePageViewModel
          +IList<NewsItem> AllNewsItems
          +ICommand OpenNews

    My markup:

        <Window DataContext="{Binding HomePageViewModel... />

        <ListBox ItemsSource="{Binding Path=AllNewsItems}">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <StackPanel>
                <TextBlock>
                  <Hyperlink Command="{Binding Path=OpenNews}">
                    <TextBlock Text="{Binding Path=NewsContent}" />
                  </Hyperlink>
                </TextBlock>
              </StackPanel>
            </DataTemplate>
          </ListBox.ItemTemplate>

    The list shows fine with all the items, but for the life of me whatever I try for the Command won't work:

        <Hyperlink Command="{Binding Path=OpenNewsItem, RelativeSource={RelativeSource AncestorType=vm:HomePageViewModel, AncestorLevel=1}}">
        <Hyperlink Command="{Binding Path=OpenNewsItem, RelativeSource={RelativeSource AncestorType=vm:HomePageViewModel, Mode=FindAncestor}}">
        <Hyperlink Command="{Binding Path=OpenNewsItem, RelativeSource={RelativeSource AncestorType=vm:HomePageViewModel, Mode=TemplatedParent}}">

    I just always get:

        System.Windows.Data Error: 4 : Cannot find source for binding with reference .....

    Update: I am setting my ViewModel like this (didn't think this would matter):

        <Window.DataContext>
          <Binding Path="HomePage" Source="{StaticResource Locator}"/>
        </Window.DataContext>

    I use the ViewModelLocator class from the MVVM Light toolkit, which does the magic.

    Read the article

  • Messages not forwarded to error queue when exception is thrown in handler (it works on my machine)

    - by darthjit
    We are using NServiceBus 4.0.5 with SQL Server (SQL Server 2012) as transport. When the handler throws an exception, NSB does not retry or move the message to the error queue. Successful messages make it to the audit queue, but the failed/errored ones don't! Interestingly, all this works on our local machines (Windows 7, SQL Server LocalDB) but not on Windows Server 2012 (SQL Server 2012). Here is the config info on the subscriber:

        <add name="NServiceBus/Transport" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" />
        <add name="NServiceBus/Persistence" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" />
        <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
        <UnicastBusConfig ForwardReceivedMessagesTo="audit">
          <MessageEndpointMappings>
            <add Assembly="Services.Section.Messages" Endpoint="Services.ACL.Worker" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And in code it is configured as follows:

        public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
        {
            public void Init()
            {
                IContainer container = ContainerInstanceProvider.GetContainerInstance();
                Configure.Transactions.Enable();
                Configure.With()
                    .AutofacBuilder(container)
                    .UseTransport<SqlServer>()
                    .Log4Net()
                    //.Serialization.Json()
                    .UseNHibernateSubscriptionPersister()
                    .UseNHibernateTimeoutPersister()
                    .MessageForwardingInCaseOfFault()
                    .RijndaelEncryptionService()
                    .DefiningCommandsAs(type => type.Namespace != null && type.Namespace.EndsWith("Commands"))
                    .DefiningEventsAs(type => type.Namespace != null && type.Namespace.EndsWith("Events"))
                    .UnicastBus();
            }
        }

    Any ideas on how to fix this? Here is the log info (there is a lot there, search for "error" to see the relevant parts): https://gist.github.com/ranji/7378249

    Read the article

  • C# multi-threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like to have writer thread(s) process them. Once the reader has started reading a file, I'd like to mark it as being processed, such as by renaming it; once it's read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use that would avoid locks? Would you have a better approach to this scheme that you'd like to share?
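
    One common shape for this is a single blocking producer/consumer queue: a reader claims each file by renaming it and enqueues it, and a small pool of workers takes files off the queue and processes them, so the queue is the only shared structure and no explicit locking is needed. A sketch of that structure in Java (the folder name, the .processing/.completed suffixes and the parse routine are all assumptions; the same pattern maps onto a thread-safe queue in C#):

        import java.io.File;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;

        public class FolderProcessor {
            private static final BlockingQueue<File> queue = new LinkedBlockingQueue<File>();

            public static void main(String[] args) {
                final File folder = new File("incoming");   // assumed folder name

                // Reader: claim each file by renaming it, then hand it to the workers.
                Thread reader = new Thread(new Runnable() {
                    public void run() {
                        File[] files = folder.listFiles();
                        if (files == null) return;
                        for (File f : files) {
                            File claimed = new File(f.getPath() + ".processing");
                            if (f.renameTo(claimed)) {      // rename marks "being processed"
                                queue.offer(claimed);
                            }
                        }
                    }
                });

                // Writers: consumers block on take(), so no explicit locks are required.
                ExecutorService writers = Executors.newFixedThreadPool(4);
                for (int i = 0; i < 4; i++) {
                    writers.submit(new Runnable() {
                        public void run() {
                            try {
                                while (true) {
                                    File f = queue.take();
                                    parse(f);               // assumed parsing routine
                                    f.renameTo(new File(
                                            f.getPath().replace(".processing", ".completed")));
                                }
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
                }
                reader.start();
            }

            private static void parse(File f) {
                // parse and process the file contents here
            }
        }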

    Read the article

  • assistance with classifying tests

    - by amateur
    I have a .NET C# library that I have created, and I am currently writing some unit tests for it. At present I am writing unit tests for a cache provider class that I have created. Being new to writing unit tests, I have two questions:

    1. My cache provider class is the abstraction layer to my distributed cache, AppFabric. Testing aspects of the cache provider class such as adding to the AppFabric cache, removing from the cache, etc. involves communicating with AppFabric. Are such tests therefore still categorised as unit tests, or are they integration tests?
    2. Because the methods under test interact with AppFabric, I would like to time them; if they take longer than a specified benchmark, the tests have failed. Again I ask: can this performance benchmark test be classified as a unit test?

    The way I have my tests set up, I want to group all unit tests together, all integration tests together, etc., hence these questions, on which I would appreciate input.

    Read the article

  • C / JSON Library in popular Linux distros?

    - by Tim Post
    I have a program written in C that has to input and output JSON over a local domain socket. I've found several C / JSON libraries that 'almost work' through searches. Prior to adopting one of the libraries that I found, I want to be sure that I'm not overlooking a library that is commonly found on modern Linux distros. I'd also really appreciate links to libraries that you use. Most likely, I'll just drop it in-tree, unless I realize that I've overlooked something widely distributed. I am tagging this as subjective because the answer that I select is the one linking to a library that works for me; that does not mean it's the 'best' library. I want to take an existing array and easily convert it to a buffer that can be sent, or take a buffer and easily convert it into an allocated array. Thanks in advance!

    Read the article

  • SPRING: How do you programmatically instantiate classes based on information passed from Flex UI

    - by babyangel86
    Imagine the UI passes back an XML node such as:

        <properties>
          <type>Source</type>
          <name>Blooper</name>
          <delay>
            <type>Deterministic</type>
            <parameters>
              <param>4</param>
            </parameters>
          </delay>
          <batch>
            <type>Erlang</type>
            <parameters>
              <param>4</param>
              <param>6</param>
            </parameters>
          </batch>
        </properties>

    and behind the scenes what it is asking is that you instantiate a class as such:

        new Source("blooper", new Exp(4), new Erlang(4, 6));

    The problem lies in the fact that you don't know what class you will need to process, and you will be sent a list of these class definitions with instructions on how they can be linked to each other. I've heard that using a BeanFactoryPostProcessor might be helpful, or a property editor/converter. However, I am at a loss as to how best to use them to solve my problem. Any help you can provide will be much appreciated.
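
    If the set of node types is open-ended, one lightweight approach is a small factory bean that maps each <type> element onto a class name and calls the matching constructor through reflection, so Spring only needs to know about the factory itself. A sketch under assumptions that are not in the question (a "simulation" package, int-valued constructor parameters, class names taken directly from the <type> text):

        import java.lang.reflect.Constructor;
        import java.util.List;

        public class DistributionFactory {

            // Build e.g. new Erlang(4, 6) from type = "Erlang" and params = [4, 6].
            public Object build(String type, List<Integer> params) throws Exception {
                Class<?> clazz = Class.forName("simulation." + type);   // assumed package

                Class<?>[] argTypes = new Class<?>[params.size()];
                Object[] args = new Object[params.size()];
                for (int i = 0; i < params.size(); i++) {
                    argTypes[i] = int.class;     // assumes int-valued parameters
                    args[i] = params.get(i);     // Integer is unwrapped by reflection
                }

                Constructor<?> ctor = clazz.getConstructor(argTypes);
                return ctor.newInstance(args);
            }
        }

    A Source node would then be assembled from its name plus the delay and batch objects produced by this factory, while the factory itself can be registered as an ordinary Spring bean so the rest of the context stays declarative.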

    Read the article

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
    Guys/gals, I have a really basic Team Foundation Server 2010 question. For those of you who have played around with TFS 2010: is it a lot more lightweight than TFS 2008? I remember installing all the pieces needed for TFS 2008 on one machine at work. I remember it being a pain to install (I know 2010 is supposed to be much better). We wanted to play around with it a little bit to see if it met our needs. Well, it brought that machine to a screeching halt. I'm needing a source control repository for home, and I thought why not just install TFS 2010 so I can get familiar with it, and maybe in the future I can make a better sell to my organization and FINALLY get them to move off of SourceSafe. But my concern is I only have one server at home (granted, I already have SQL Server installed) and don't want to buy a machine just for this purpose. I'd also like to get more familiar with CI too. Anyway, if TFS is going to be too heavy I'll just use Subversion, but I'd like to use TFS if possible. Any help would be appreciated. Thanks, Ncage

    Read the article

  • What does it mean to double license?

    - by Adrian Panasiuk
    What does it mean to double-license code? I can't just put both licenses in the source files. That would mean that I mandate users to follow the rules of both of them, but the licenses will probably be contradictory (otherwise there'd be no reason to double-license). I guess this is something like cryptographic chaining: cipher = crypt_2(crypt_1(clear)) (generally) means that cipher is neither the output of crypt_2 on clear nor the output of crypt_1 on clear; it's the output of the composition. Likewise, in double-licensing, in reality my code has one license; it's just that this new license says "please follow all of the rules of license1, or all of the rules of license2, and you are hereby granted the right to redistribute this application under this 'double' license, license1 or license2, or any license under which license1 or license2 allow you to redistribute this software, in which case you shall replace the relevant licensing information in this application with that of the new license." (Does this mean that before someone may use the app under license1, he has to perform the operation of redistributing to himself? How would he document the fact that he did that operation?) Am I correct? What LICENSE file, and what text in the source files, would I need if I wanted to double-license under, for the sake of example, Apache v2 and GPLv3?

    Read the article

  • Is there a generic way of dealing with varying connection strings in C#?

    - by James Wiseman
    I have an application that needs to connect to a SQL database and execute a SQL Agent job. The connection string I am trying to access is stored in the registry, which is easily enough pulled out. This application is to be run on multiple computers, and I cannot guarantee the format of this connection string being consistent across them. Two that I have pulled out, for example, are:

        Data Source=Server1;Initial Catalog=DB1;Integrated Security=SSPI;
        Data Source=Server2;Initial Catalog=DB1;Provider=SQLNCLI.1;Integrated Security=SSPI;Auto Translate=False;

    I can use an object of type System.Data.SqlClient.SqlConnection to connect to the database with the first connection string; however, I get the following error when I pass the second to it:

        keyword not supported: 'provider'

    Similarly, I can use an object of type System.Data.OleDb.OleDbConnection to connect to the database with the second connection string; however, I get the following error when I pass the first to it:

        An OLEDB Provider was not specified in the ConnectionString

    I can solve this by scanning the string for 'Provider' and doing the connect conditionally, but I can't help feeling that there is a better way of doing this and of handling the connection strings in a more generic fashion. Does anyone have any suggestions?

    Read the article

  • I wrote my own web framework: now, how do I keep it in sync with applications? Must I use versions?

    - by Daniel Koch
    ... and I built the first web application using it; now I'm going to create the second. In this first web application I enhanced the framework's core library with new things and promptly updated the framework branch. I'm using Bazaar to keep the framework and the web application committed. The application was, in the beginning, a full branch of the framework source tree; now I'm updating the framework manually at every change to core files (copying changed files from the web app to the framework's branch). With this second web application that I'm going to create, I need to know which version (or revision) the application is based on. If I find a bug in this version I can fix it and then sync the files with the first web application without worrying: the functions will be the same for that application. If I'm going to make changes in the core (new behavior, new functions in the library, or something new in the source tree) it must be named as a "new version". What's the best way to do this? Because I'm using a distributed version control system (Bazaar), I'm not dealing with VERSIONS, but with revision numbers that change every time. Please refresh my mind with new ideas.

    Read the article

  • Binding not working correctly in Silverlight

    - by Harsh Maurya
    I am using a DataGrid in my Silverlight project which contains a custom checkbox column. I have bound its command property to a property of my ViewModel class. Now, the problem is that I want to send the "selected item" of the DataGrid through the command parameter, for which I have written the following code:

        <sdk:DataGrid AutoGenerateColumns="False" Margin="10,0,10,0" Name="dataGridOrders"
                      ItemsSource="{Binding OrderList}" Height="190">
          <sdk:DataGrid.Columns>
            <sdk:DataGridTemplateColumn Header="Select">
              <sdk:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                  <CheckBox>
                    <is:Interaction.Triggers>
                      <is:EventTrigger EventName="Checked">
                        <is:InvokeCommandAction
                            Command="{Binding Source={StaticResource ExecutionTraderHomePageVM}, Path=OrderSelectedCommand, Mode=TwoWay}"
                            CommandParameter="{Binding ElementName=dataGridOrders, Path=SelectedItem}" />
                      </is:EventTrigger>
                      <is:EventTrigger EventName="Unchecked">
                        <is:InvokeCommandAction
                            Command="{Binding Source={StaticResource ExecutionTraderHomePageVM}, Path=OrderSelectedCommand, Mode=TwoWay}"
                            CommandParameter="{Binding ElementName=dataGridOrders, Path=SelectedItem}" />
                      </is:EventTrigger>
                    </is:Interaction.Triggers>
                  </CheckBox>

    But I am always getting null in the parameter of my command's Execute method. I have tried with other properties of the DataGrid, such as Width, ActualHeight, etc., but to no avail. What am I missing here?

    Read the article
