Search Results

Search found 433 results on 18 pages for 'centralized'.

Page 2 of 18

  • In centralized version control, is it always good to update often?

    - by janos
    Assuming that:

    • You are in a team developing some software.
    • Your team is using centralized version control in the development process.
    • You are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build.
    • Your team members commit something every day that affects some of the files you're working with for your fancy new feature.

    Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Can we say that it is always a good idea to update often?

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers, Web Services, Web UIs, etc. I am looking for an open-source or 3rd-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word or Eclipse or Visual Studio. The settings would control various behaviors and features of the product. (Probably not settings like which database to connect to, but more like whether to show line numbers on the page by default.) Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service from which each of the deployables can get its individual settings. Does anyone know of a centralized service providing this sort of feature, or can anyone help me find an alternative to rolling our own?

    Read the article

  • Marking Changes to database...

    - by KoolKabin
    Hi guys... I am developing an application that runs on a central server and distributed computers. I am supposed to write an application to back up the data from the distributed machines and merge it on the central server. I thought of compressing the whole local database and sending it to the server for merging, but as the database grows, the size of the compressed file grows with it. So is there any way to merge the data on the central server without sending the whole database? I need to do this on a daily basis: take a backup every day and send it to the server.
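
    One common shape for the daily delta being described here is to track a last-modified timestamp per row and ship only the rows changed since the previous sync. A minimal JDBC sketch, assuming a hypothetical records table with an indexed updated_at column and a MySQL-style upsert on the central side (deletions would need a separate tombstone table, not shown):

        import java.sql.*;

        public class DeltaSync {
            // Copy rows modified since the last sync from the local database
            // to the central one. Table and column names are hypothetical.
            public static Timestamp pushDelta(Connection local, Connection central,
                                              Timestamp lastSync) throws SQLException {
                Timestamp newWatermark = lastSync;
                try (PreparedStatement sel = local.prepareStatement(
                         "SELECT id, payload, updated_at FROM records WHERE updated_at > ?");
                     PreparedStatement ups = central.prepareStatement(
                         // Upsert syntax varies by DBMS; this is MySQL-style.
                         "INSERT INTO records (id, payload, updated_at) VALUES (?, ?, ?) " +
                         "ON DUPLICATE KEY UPDATE payload = VALUES(payload), " +
                         "updated_at = VALUES(updated_at)")) {
                    sel.setTimestamp(1, lastSync);
                    try (ResultSet rs = sel.executeQuery()) {
                        while (rs.next()) {
                            ups.setLong(1, rs.getLong("id"));
                            ups.setString(2, rs.getString("payload"));
                            Timestamp t = rs.getTimestamp("updated_at");
                            ups.setTimestamp(3, t);
                            ups.addBatch();
                            if (t.after(newWatermark)) newWatermark = t;
                        }
                    }
                    ups.executeBatch();
                }
                return newWatermark; // persist this watermark for the next daily run
            }
        }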

    Read the article

  • Sell me Distributed revision control

    - by ring bearer
    I know there are thousands of similar topics floating around. I read at least 5 threads here on SO. But why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects). What is the advantage or value of committing locally? What? Really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. They even have a feature to label your changes/versions at the IDE level! What if I crash my hard drive? Where did my local repository go? (So how is it cool compared to checking in to a central repo?) Working offline or on an airplane: what is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Until then it does not matter how I track my changes locally. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing its praises? Linus lives in a different world compared to the offshore developers in my mid-sized project. Pitch me!

    Read the article

  • Does any centralized version control faster than SVN exist?

    - by Savageman
    Hello, I've been using SVN for a long time and now we're trying out Git. I'm not getting into the centralized vs. decentralized debate here; my only concern is speed. The latter tool is much faster. But sometimes I NEED to work with a centralized approach, which is much simpler and less complex than the decentralized one. The learning curve is really quick, which saves a lot of time (while digging into decentralized tools would waste time, given that the learning curve is much longer and we encounter more problems when working with them). However, SVN is really slow compared to Git, and I don't think that has anything to do with it being centralized. Decentralized systems also have to deal with server connections and file transfer. So I can easily imagine that a faster implementation of centralized version control could exist. Does anyone have any clue about this?

    Read the article

  • Centralizing MessageBox handling for an application

    - by DRapp
    I'm wondering how others deal with trying to centralize MessageBox calls. Instead of having long text embedded all over the place in code, in the past (in a non-.NET language) I would put system- and application-level "messagebox" messages into a database file which would be "burned" into the executable, much like a resource file in .NET. When a prompting condition arose, I would just call something like MBAnswer = MyApplication.CallMsgBox( IDUserCantDoThat ), then check MBAnswer upon return for a yes/no/cancel or whatever. In the database table, I would have things like what the messagebox title would be, the buttons that would be shown, the actual message, and a special flag that automatically tacked on a standard comment like "Please contact the help desk if this happens.". The function would call the messagebox with all applicable settings and just return the answer. The big benefits of this were one location for all the "context" of messages and, via constants, code that made it easier to read which message was going to be presented to the user. Does anyone have a similar system in .NET, or is this just a bad idea in the .NET environment?
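
    For what it's worth, the pattern described here translates fairly directly to a keyed message catalog plus one dispatch function. A minimal sketch in Java, using Swing's JOptionPane as the stand-in dialog; the message IDs and catalog contents are hypothetical, and in .NET the catalog would live in a resource file behind a wrapper around MessageBox.Show:

        import javax.swing.JOptionPane;
        import java.util.Map;

        public final class MessageCatalog {
            public enum MsgId { USER_CANT_DO_THAT, CONFIRM_DELETE }

            // In a real system these would be loaded from a resource file or
            // table, including title, buttons and the "standard comment" flag.
            private static final Map<MsgId, String[]> MESSAGES = Map.of(
                MsgId.USER_CANT_DO_THAT, new String[] {"Error", "You can't do that."},
                MsgId.CONFIRM_DELETE,    new String[] {"Confirm", "Delete this item?"});

            // Central choke point: every dialog in the app goes through here.
            public static int callMsgBox(MsgId id) {
                String[] m = MESSAGES.get(id);
                String text = m[1] + "\nPlease contact the help desk if this persists.";
                return JOptionPane.showConfirmDialog(null, text, m[0],
                                                     JOptionPane.YES_NO_CANCEL_OPTION);
            }
        }

        // Usage: int answer = MessageCatalog.callMsgBox(MessageCatalog.MsgId.CONFIRM_DELETE);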

    Read the article

  • How can I handle all my errors/messages in one place on an Asp.Net page?

    - by Atomiton
    Hi all, I'm looking for some guidance here. On my site I put things in Web user controls. For example, I will have a NewsItem control, an Article control, a ContactForm control. These will appear in various places on my site. What I'm looking for is a way for these controls to pass messages up to the page they exist on. I don't want to tightly couple them, so I think I will have to do this with events/delegates. I'm a little unclear as to how I would implement this, though. A couple of examples:

    1. A contact form is submitted. After it's submitted, instead of replacing itself with a "Your mail has been sent" message, which limits the placement of that message, I'd like it to just notify the page the control is on with a status message and perhaps a suggested behaviour. So, a message would include the text to render as well as an enum like DisplayAs.Popup or DisplayAs.Success.

    2. An Article control queries the database for an Article object. The database returns an exception. A custom exception is passed to the page along with the DisplayAs.Error enum. The page handles this error and displays it wherever errors go.

    I'm trying to accomplish something similar to the ValidationSummary control, except that I want the page to be able to display the messages as it sees fit. Again, I don't want to tightly bind to, or rely on, a control existing on the page. I want the controls to raise these events, but the page can ignore them if it wants. Am I going about this the right way? I'd love a code sample just to get me started. I know this is a more involved question, so I'll wait longer before voting/choosing the answers.
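
    Since a code sample was requested, here is a sketch of the event/observer shape being described. It's written in Java purely to illustrate the decoupling; in ASP.NET the control would expose a C# event with a custom EventArgs carrying the text and the DisplayAs enum, and the page would optionally subscribe:

        import java.util.ArrayList;
        import java.util.List;

        enum DisplayAs { POPUP, SUCCESS, ERROR }

        // The message a control raises; the page decides how to render it.
        record StatusMessage(String text, DisplayAs displayAs) {}

        interface StatusListener {
            void onStatus(StatusMessage message);
        }

        // A control raises events without knowing whether anyone listens.
        class ContactFormControl {
            private final List<StatusListener> listeners = new ArrayList<>();

            public void addStatusListener(StatusListener l) { listeners.add(l); }

            public void submit() {
                // ... send the mail ...
                for (StatusListener l : listeners) {
                    l.onStatus(new StatusMessage("Your mail has been sent.",
                                                 DisplayAs.SUCCESS));
                }
            }
        }

        // The page subscribes if it cares; the control never references the page.
        class Page {
            public static void main(String[] args) {
                ContactFormControl form = new ContactFormControl();
                form.addStatusListener(m ->
                    System.out.println(m.displayAs() + ": " + m.text()));
                form.submit();
            }
        }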

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:

    • Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    • Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    • Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    • Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    • Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    • Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server, and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • How to repair a damaged working copy (which has a centralized .svn directory)?

    - by Heinrich Ulbricht
    I recently upgraded my TortoiseSVN installation to version 1.7.1. This forced me to upgrade my working copy as well. The upgrade removed all (but one) of the .svn directories from all subdirectories, leaving only one in the root. Now, out of the blue (of course; I suspect my antivirus software), there is an error when I, for example, try to clean up the working copy. I am also not able to commit anything. The error message when cleaning up is: "Cleanup failed to process the following paths: C:\svn. Can't open file 'C:\svn\.svn\pristine\73\73bcc5fa7819f84f56b81dfa0236f0aac7b7d404.svn-base': The system cannot find the file specified." I traced the error to the presence of one directory within the working copy. If I rename it, then everything works; when it is present, I get the error. I also deleted it and checked it out again. No change, the error persists. With previous versions I could repair damage in the .svn easily: just delete the offending folder and check out again. I cannot do this anymore because now the .svn dir is centralized. What could I do to repair my working copy?

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using Puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box; and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty, totally-insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).

    Read the article

  • Is there a centralized list of country names that can be used for web drop-down boxes (and validation)?

    - by Thr4wn
    There are examples online of web select boxes with a huge list of countries, and one of those would probably be good enough for me to use. However, by Murphy's law, there's bound to be some random country that someone is from that isn't on my list (and probably someone else has already run into this and updated their local list). Also, when new countries are added, I won't know about it. Basically, I feel it's better practice, and a better smell, if there is some centralized list of country names that I can use and trust (it could also set/follow standards for exact naming: "United St..." vs "USA", etc.). I would prefer a solution that isn't IIS-specific, if possible.
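
    For illustration, one centralized source of this kind is the ISO 3166 country data that ships with language runtimes. A small Java sketch of reading it (the .NET equivalent would be CultureInfo/RegionInfo):

        import java.util.Arrays;
        import java.util.Locale;

        public class CountryList {
            public static void main(String[] args) {
                // ISO 3166 country codes bundled with the JDK; display names
                // are localized, English is requested here for consistency.
                Arrays.stream(Locale.getISOCountries())
                      .map(code -> code + " = "
                           + new Locale("", code).getDisplayCountry(Locale.ENGLISH))
                      .sorted()
                      .forEach(System.out::println);
            }
        }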

    Read the article

  • Docbook: Centralized glossary, where each document includes only terms which appear in it?

    - by DanM
    Trying to figure out if this (or something similar) is possible. I'm working with a collection of technical documents, all written in DocBook. The documents each contain many acronyms, technical terms and other jargon, so we need to include a glossary with each of them. The ideal situation would be this: I have a central glossary.xml file which contains a glossentry item (or similar) for each such term; then, each of the documents uses that glossary file, but only prints out the terms which appear IN that document. So, each document has its own glossary printed at the end, but the actual glossary entries are stored centrally. Is that doable?
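
    For what it's worth, the DocBook XSL stylesheets support roughly this workflow via the glossary.collection parameter: the entries live in a central glossary document, each book includes an empty glossary with role="auto", and the stylesheets emit only the entries whose glossterms actually appear in that document. A hedged sketch, with placeholder file names:

        <!-- glossary.xml: the central collection, an ordinary DocBook glossary -->
        <glossary>
          <glossentry id="gloss-api">
            <glossterm>API</glossterm>
            <glossdef><para>Application Programming Interface.</para></glossdef>
          </glossentry>
          <!-- ... one glossentry per term ... -->
        </glossary>

        <!-- in each document: mark terms and add an empty auto glossary -->
        <para>The <glossterm>API</glossterm> accepts JSON.</para>
        <glossary role="auto"/>

        <!-- at build time, point the stylesheets at the collection, e.g.: -->
        <!-- xsltproc --stringparam glossary.collection glossary.xml docbook.xsl book.xml -->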

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices, and only keeping the domain controller(s) at the primary site (assuming upgrades are coming regardless)? Under what circumstances would you choose one setup over the other?

    Read the article

  • SSO solution and centralized user mgmt for about 10-30 Ubuntu machines?

    - by nbr
    Hello, I'm looking for a clean way to centralize user management. The setup:

    • About 10-30 Linux machines (Ubuntu 10.04 LTS server)
    • Maybe 10-30 users for now

    The requirements (hopes and expectations):

    • A single place for the administrator to manage user accounts, passwords, and the list of machines each user has access to (and probably groups). Doesn't have to be fancy.
    • Single sign-on for SSH: the user should be able to log in from machine A to machine B without re-entering his/her password.

    Quick Google searches give me pointers to OpenLDAP and Kerberos, but I'm not sure where to start and what problem each solution will actually solve. Which way to go? I'd love to find a clear guide that focuses on this subject. (Or: am I asking the wrong question?)

    Read the article

  • What are the Best Practices and tools for managing Windows Desktops from a Linux server?

    - by JJ
    I know this is a loaded question! What are the best ways to manage Windows (2000, XP, Vista, Win7) workstations from a centralized Linux server? I would like to replace the functionality of MS SBS Server with a Linux box. The following issues would need to be addressed:

    • File Sharing
    • Authentication, Authorization, and Access Control
    • Software Installation
    • Centralized Login Scripts
    • Centralized Backup

    Read the article

  • Mercurial for OS projects and SVN for enterprise projects?

    - by ajsie
    Correct me if I'm wrong, but aren't distributed SCMs for OS projects, while centralized SCMs are better for corporate/private projects? Because with e.g. Mercurial anyone gets an exact copy of the repository with full history features, while with a centralized one you only get the latest working copy. I'm more focused on private projects, so I wonder if it's better to go with centralized SCMs, or doesn't it matter?

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
    This post aims to highlight one of the features of Oracle Coherence which allows it to be easily added and integrated into a wide variety of projects: the REST API exposed by the Coherence nodes, through which you can interact with the in-memory data grid.

    Oracle Coherence and Oracle Service Bus are natively integrated through a feature of the Oracle Service Bus which allows you to use the Coherence grid cache during the configuration of a business service. This feature lets you use an intermediate cache layer to retrieve the answers from previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the web console of Oracle Service Bus, you can decide the eviction policies for the objects/answers and define the discriminating parameters that identify their uniqueness.

    The Coherence REST APIs, however, allow you to integrate both products for other needs, enabling new architecture designs. Consider a Coherence node as a simple service which interoperates through standard services, in particular REST (with JSON and XML). Think of Coherence as a company-wide shared service, a centralized "map and reduce" implementation that you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still imagine connectors and code. This type of integration does not require writing custom code or a complex implementation to be self-supporting. The added value comes from the incredible value of both products independently, and still more from their simple and robust integration. As already mentioned, this scenario reveals a hidden new door behind the columns of these two products, a door which leads to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps toward mobile and Web 2.0.

    Below is a small and simple demo that shows how easily these two products can be integrated using the Coherence REST API, and which is also intended to spark ideas for new enterprise architectures using this approach. The idea is to create a centralized alerting system, fed easily from any of the company's applications regardless of the technology with which they were built, and then to use a standard representation protocol, RSS, through a service exposed by the service bus, so you can browse and search only the alerts you are interested in, by category, author, title, date, etc. The steps needed to implement this system are very simple and very few. They are listed below and described so they can be easily replicated within your environment. I would remind you that the demo is only meant to demonstrate how easily Oracle Coherence and the Oracle Service Bus can be integrated, and to stimulate your imagination toward new technological approaches.

    1) Install the two products. In this demo I used (if necessary, consult the installation guides of the two products):

    • Oracle Service Bus ver. 11.1.1.5.0 http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
    • Oracle Coherence ver. 3.7.1 http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html

    2) Because we chose to create a centralized alerting system, we need to define a structured type containing the alerting attributes needed to preserve and organize the information of the various alerts sent by the different applications.
    Here, then, a Java class named Alert was built, containing the canonical properties of an alert:

    • Title
    • Description
    • System
    • Time
    • Severity

    3) Next, we need to create two configuration files for the Coherence node, in order to save the Alert objects within the grid through the REST/HTTP protocol (in addition to the native APIs for Java, C++ and .NET). Here are the two minimal configuration files for Coherence: coherence-rest-config.xml and resty-server-config.xml. This minimal configuration allows me to use a distributed cache named "alerts" that can also be accessed via HTTP/REST on host "localhost" over port "8080"; objects are of type oracle.cohsb.Alert.

    4) Below, a simple Java class that represents the alert messages (a hedged reconstruction of this class appears at the end of this article).

    5) At this point we just need to start our Coherence node, able to listen on the HTTP protocol to manage the "alerts" cache, which will receive incoming XML or JSON objects of type Alert. Remember to include in the classpath of the Coherence node the Alert Java class plus the required Coherence libraries and configuration files. Then run the Coherence node class com.tangosol.net.DefaultCacheServer, setting the following parameters: -Dtangosol.coherence.log.level=9 -Dtangosol.coherence.log=stdout -Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml

    6) Let's create a procedure to test our Coherence configuration and insert some custom alerts into our cache. The technology with which you achieve this doesn't really matter: JavaScript, Python, Ruby, Scala, C++, Java... because the protocol used to communicate with Coherence is simply HTTP with JSON or XML. For this little demo I chose Java: a method to send/put an alert to the cache, a method to query and view the content of the cache, and finally a main method that executes both (a sketch of this client also appears at the end of this article). No special library is added to the classpath for our class (the JSON structure is statically defined). When executed, it asks for some information, such as title and description, in order to compose and send an alert to the cache, and then it performs a query against the same cache. A good exercise at this point would be to create the same procedure using other technologies, such as a simple HTML page containing some JavaScript code, or Python, Ruby, and so on.

    7) Now we are ready to start configuring the Oracle Service Bus in order to integrate the two products. First we integrate the internal alerting system of Oracle Service Bus with our centralized alerting system based on the Coherence node. This ensures that, from monitoring or directly from within our proxy message flows, we can throw alerts and save them directly into the Coherence node. To do this I chose JMS, natively present inside Oracle WebLogic / Service Bus. Access the Oracle WebLogic Administration Console and create and configure a new JMS connection factory and a new JMS destination (queue). Then create a new resource of type "alert destination" within our Oracle Service Bus project, configured to use the newly created JMS connection factory and JMS destination.
    Finally, in order to drain the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and a new proxy service within our Oracle Service Bus project. Our business service is responsible for sending a message to our Coherence REST service, using PUT as the method. Our proxy service has to collect all messages enqueued on the destination and execute an XQuery transformation on them, translating them into valid XML alert objects to be sent to our Coherence service through the newly created business service. (The message flow pipeline contains the XQuery transformation.)

    Incredibly, we have just done a basic first integration between the native alerting system of Oracle Service Bus and our centralized alerting system simply by configuring our Coherence node, without developing anything. It's time to test it out. To do this I created a proxy service that generates an alert using our "alert destination" whenever the proxy is invoked. After a few invocations of this proxy, generating fake alerts, we can open a browser at the URL http://localhost:8080/alerts/ and see what has been inserted into the Coherence node.

    8) We are ready for the final step: a new message flow that can be used to search for and display the results in a standard way. For this I chose RSS as the standard representation, to display a formatted result on a huge variety of devices, such as readers for the iPhone and Android. The query may already be defined at request time, returning only the feed items related to our needs. To do this we need to create a new business service, a new proxy service, and a new XQuery transformation that translates the collection of alerts returned by our Coherence node into a well-formatted RSS 2.0 document. So we start right from this XQuery resource, which transforms a collection of alert XML returned by the Coherence node into a well-formatted RSS 2.0 feed; then our new business service, which searches the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service, which will be exposed as an RSS feed to various mobile devices and traditional web readers, and in which we intercept any search query and transform the result returned by the business service into an RSS 2.0 feed. (The message flow includes the transformation phase: Alert TO Feed Items.)

    A few little tricks to follow when routing to the business service: check for any queries present in the URL to request only a subset of alerts, and set the HTTP header "Accept" to get an XML answer instead of JSON. In our little demo we also statically added some Coherence parameters to the request, sort=time:desc;start=0;count=100, so that Coherence returns the results sorted by date, from the first up to a maximum of 100.

    Done!! Just incredible: our centralized alerting system is ready.
    It inherits all the qualities and capabilities of the two products involved, Oracle Coherence and Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance). Now try using your mobile device, or a normal web browser, to access the RSS feed just published. Some URLs you may test:

    • Search for the last 100 alerts: http://localhost:7001/alarms
    • Search for alerts whose time is not null (time is not null): http://localhost:7001/alarms?q=time+is+not+null
    • Search for alerts whose system property is "Web Browser" (system = 'Web Browser'): http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27
    • Search for alerts whose system property is "Web Browser", severity is "Fatal", and title contains the word "Javascript" (system = 'Web Browser' and severity = 'Fatal' and title like '%Javascript%'): http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27

    To compose more complex queries for your needs, I suggest reading the chapter of the Coherence documentation on CohQL (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm

    Some useful links:

    • Oracle Coherence REST API Documentation http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
    • Oracle Service Bus Documentation http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
    • REST explanation from Wikipedia http://en.wikipedia.org/wiki/Representational_state_transfer

    The whole material for this demo can be downloaded from http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip

    Author: Nino Guarnacci.
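
    Since the original code listings in steps 4 and 6 were published as images, here is a hedged reconstruction of what the Alert class from step 4 might look like. Only the package name (from the config) and the five property names come from the article; the types, the marshalling details, and everything else are assumptions:

        package oracle.cohsb;

        import java.io.Serializable;
        import java.util.Date;

        // Hedged reconstruction of the article's Alert type; only the five
        // property names are taken from the article, the rest is assumed.
        public class Alert implements Serializable {
            private String title;
            private String description;
            private String system;
            private Date time;
            private String severity;

            public Alert() { } // no-arg constructor for XML/JSON marshalling

            public String getTitle() { return title; }
            public void setTitle(String title) { this.title = title; }
            public String getDescription() { return description; }
            public void setDescription(String description) { this.description = description; }
            public String getSystem() { return system; }
            public void setSystem(String system) { this.system = system; }
            public Date getTime() { return time; }
            public void setTime(Date time) { this.time = time; }
            public String getSeverity() { return severity; }
            public void setSeverity(String severity) { this.severity = severity; }
        }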
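
    And a hedged sketch of the step 6 test client, using only the JDK's HttpURLConnection against the Coherence REST endpoint described in step 3. The key-naming scheme and the epoch-millis encoding of the time property are assumptions:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class AlertClient {
            static final String BASE = "http://localhost:8080/alerts";

            // PUT one alert into the cache under the given key.
            static void putAlert(String key, String title, String description,
                                 String system, String severity) throws IOException {
                String json = String.format(
                    "{\"title\":\"%s\",\"description\":\"%s\",\"system\":\"%s\"," +
                    "\"time\":%d,\"severity\":\"%s\"}",
                    title, description, system, System.currentTimeMillis(), severity);
                HttpURLConnection c =
                    (HttpURLConnection) new URL(BASE + "/" + key).openConnection();
                c.setRequestMethod("PUT");
                c.setRequestProperty("Content-Type", "application/json");
                c.setDoOutput(true);
                try (OutputStream os = c.getOutputStream()) {
                    os.write(json.getBytes("UTF-8"));
                }
                System.out.println("PUT " + key + " -> HTTP " + c.getResponseCode());
            }

            // GET and print the cache contents as JSON.
            static void listAlerts() throws IOException {
                HttpURLConnection c =
                    (HttpURLConnection) new URL(BASE + "/").openConnection();
                c.setRequestProperty("Accept", "application/json");
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(c.getInputStream(), "UTF-8"))) {
                    for (String line; (line = r.readLine()) != null; ) {
                        System.out.println(line);
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                putAlert("alert-1", "Disk almost full", "Volume /data at 95%",
                         "Web Browser", "Fatal");
                listAlerts();
            }
        }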

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git so please feel free to RTFM me... I have multiple development sites (none of which can communicate with each other over a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is a centralized repository at each site that can be pulled from, with development occurring in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question. I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) since we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work; it seems I need a checked-out repo. Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing? A solution I thought about that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media with remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction toward resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the viewer). However, prior to being viewed in the viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).

    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices.

    Choice 1: Individual meta-data vs. centralized meta-data. Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes at intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient
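
    As a concrete rendering of the tri-state tag model described above, here is a minimal sketch in Java; all names are made up for illustration:

        import java.util.HashMap;
        import java.util.Map;

        // Sketch of the tri-state tagging model: a tag is present, absent,
        // or "dirty" (not yet evaluated for this image).
        class ImageRecord {
            enum TagState { PRESENT, ABSENT, DIRTY }

            private final Map<String, TagState> tags = new HashMap<>();

            // A newly introduced tag starts out dirty for every image.
            void introduceTag(String tag) { tags.putIfAbsent(tag, TagState.DIRTY); }

            // Viewing the image resolves its dirty tags one way or the other.
            void resolve(String tag, boolean present) {
                tags.put(tag, present ? TagState.PRESENT : TagState.ABSENT);
            }

            boolean isDirtyFor(String tag) {
                return tags.getOrDefault(tag, TagState.DIRTY) == TagState.DIRTY;
            }
        }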

    Read the article

  • SAP Applications Run Better on Oracle Exadata

    - by jgelhaus
    To yield the results necessary to stay competitive, your business-critical applications must be able to access the most reliable and up-to-date information. That's why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings. Watch our latest on-demand Webcast to find out why Oracle Exadata is the ideal platform for running your SAP applications. You'll learn how you can:

    • Increase the performance of SAP applications
    • Enhance reliability with a centralized, scalable platform
    • Ensure quick, safe, and easy deployments

    Watch it now. Highlights include customer case studies and practical deployment strategies.

    Read the article

  • SO-Aware @ TechReady (Microsoft Event)

    - by SURESH GIRIRAJAN
    A session on SO-Aware was presented at the Microsoft TechReady event this week; see here for more details: http://tellagostudios.com/blog/so-aware-highlighted-microsoft-techready. Check here for more details on SO-Aware and how to leverage it within your enterprise if you're using BizTalk Server, WCF services, or services built on Azure. It provides a lot of capabilities, such as:

    • Centralized service repository
    • Centralized configuration management
    • Service testing
    • Monitoring
    • Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others
    • SO-Aware Test Workbench, which provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load-testing engine, allowing developers to transparently load-test applications built on Microsoft's service-oriented technologies such as WCF, BizTalk Server, or the Windows Server or Azure AppFabric.

    Read the article

  • Manageable Services

    This article describes the design, implementation, and tooling of model-driven WorkflowServices, logically centralized in the Repository and physically decentralized for runtime projection.

    Read the article
