Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 332/401 | < Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >

  • Best way to enforce inter-table constraints inside database

    - by FerranB
    I'm looking for the best way to enforce inter-table constraints that go a step beyond foreign keys, for instance checking that a date on a child record falls within a date range stored on the parent row. For example:

        Parent table
        ID    DATE_MIN     DATE_MAX
        ----- ----------   ----------
        1     01/01/2009   01/03/2009
        ...

        Child table
        PARENT_ID   DATE
        ----------  ----------
        1           01/02/2009
        1           01/12/2009   <--- HAS TO FAIL!
        ...

    I see two approaches: create on-commit materialized views as shown in this article (or the equivalent on other RDBMSs), or use stored procedures and triggers. Is there any other approach? Which is the best option?

    UPDATE: The motivation of this question is not "constraints in the database vs. constraints in the application". I think that is a tired question and everyone does it the way they prefer; for better or worse, I am putting the constraints in the database. Given that, the question is: which is the best option to manage inter-table constraints inside the database? I have added "inside database" to the question title.

    UPDATE 2: Someone added the "oracle" tag. Materialized views are indeed an Oracle tool, but I'm interested in any option, whether it is Oracle-specific or works on other RDBMSs.

    Read the article

  • Moving an ASP.NET application to the cloud

    - by user102533
    I am new to cloud computing, so please bear with me here. I have an existing ASP.NET application with SQL Server 2008 hosted on a Virtual Private Server. Here's what it briefly does:

    1. The front end accepts a user's request and adds it to a DB table.
    2. A Windows service running in the background picks up the request, processes it and sets a flag. The Windows service also creates a file for the user to download.
    3. The user downloads the file.

    I'd like to move this web application, along with the service, to the cloud. The architecture I envision is one web server on which I will install the front end and the Windows service. I'll also have a cloud file server for file storage. The Windows service should somehow create a file and transfer it to the cloud file server (I assume this is possible?).

    My questions: Does this architecture look like I am going in the right direction? I know Amazon has been providing cloud services for a long time; if I want to make minimal changes to my application, should I go with Amazon, Rackspace, Azure or some other provider? I understand that I would pay not only for file storage and the web server but also for the bandwidth of users downloading the file and of the Windows service uploading the file to the cloud server; can I assume these costs are negligible? Should I go with a VPS + cloud files combination to begin with? Any other thoughts/suggestions?

    Read the article

  • Stack overflow on XMLListCollection collectionEvent

    - by reidLinden
    I'm working on a Flex 3 project, and I'm using a pair of XMLListCollections to manage a combobox and a data grid. The combobox piece is working perfectly. The XMLListCollection for this is static: the user picks an item and, on "change", it fires off an addItem() to the second collection. The second collection's datagrid then displays the updated list, and all is well. The datagrid, however, is editable. A further complication is that I have another event handler bound to the second XMLListCollection's "change" event, and in that handler I make additional changes to the second list. This essentially causes an infinite loop (a stack overflow :D) of the second list's "change" handler. I'm not really sure how to handle this. Searching has brought up an idea or two regarding the autoUpdate functionality, but I wasn't able to get much out of them: the behavior persists, executing the 'updates' as soon as I re-enable, so I imagine I may be doing it wrong. I want the update to run in general, just not DURING that code block. Thanks for your help!

    Read the article

  • Why is ipdb not displaying stdout after my input?

    - by BryanWheelock
    I can't figure out why ipdb is not displaying stdout properly. I'm trying to debug why a test is failing, so I drop into the ipdb debugger. For some reason my input seems to be accepted, but the stdout is not displayed until I (c)ontinue. Is this something broken in ipdb? It makes it very difficult to debug a program. Below is an example ipdb session; notice how I attempt to display the values of the attributes with user.is_authenticated(), user_profile.reputation and user.is_superuser, but the results are not displayed until 'begin captured stdout':

        In [13]: !python manage.py test
        Creating test database...
        < SNIP remove loading tables >
        nosetests ...E..
        /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or

        user.is_authenticated()
        user_profile.reputation
        user.is_superuser
        c
        F
        /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or

        c
        .....EE......
        FAIL: test_can_retag_questions (APP.forum.tests.test_views.AuthorizationFunctionsTestCase)
        Traceback (most recent call last):
          File "/Users/Bryan/work/APP/../APP/forum/tests/test_views.py", line 71, in test_can_retag_questions
            self.assertTrue(auth.can_retag_questions(user))
        AssertionError:
        -------------------- begin captured stdout << ---------------------
        ipdb True
        ipdb 4001
        ipdb False
        ipdb
        --------------------- end captured stdout << ----------------------
        Ran 20 tests in 78.032s
        FAILED (errors=3, failures=1)
        Destroying test database...

        In [14]:

    Here is the actual function I'm trying to test:

        def can_retag_questions(user):
            """Determines if a User can retag Questions."""
            user_profile = user.get_profile()
            import ipdb; ipdb.set_trace()
            return user.is_authenticated() and (
                RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
                user.is_superuser)

    I've also tried to use pdb, but that doesn't display anything: I see my test progress ...., and then nothing, and it is not responsive to keyboard input. Is this a problem with readline?

    Read the article

  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where: there is more than one business-value-column combination that becomes a unique PK; the composite PK values get duplicated across the detail tables; and you cannot change the business values inside that composite PK. I know Hibernate can support both types of PK, but I'm left wondering because of previous chats with experienced colleagues, who said that a composite PK is easier to deal with when writing complex SQL queries and stored-procedure processing. They went on to say that using surrogate keys complicates joins, and that there are several situations where certain things become impossible with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll add more details next time. I'm currently starting a project and want to try out surrogate keys, since they don't get duplicated across tables and the business-column values can change. And when some combination of business values needs to be unique, I can use something like:

        @Table(name="MY_TABLE", uniqueConstraints={
            @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"})  // name + lastName combination must be unique
        })

    But I'm still in doubt because of the earlier discussion about composite keys. Could you share your experiences in this matter? Thank you!
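
    For reference, here is a minimal sketch of the surrogate-key approach described above, using a hypothetical Person entity (the entity, field and column names are invented for illustration). The @UniqueConstraint keeps the business columns unique while the generated Long id stays free of business meaning:

        import javax.persistence.*;

        @Entity
        @Table(name = "MY_TABLE", uniqueConstraints = {
            @UniqueConstraint(columnNames = {"FIRST_NAME", "LAST_NAME"})  // business uniqueness
        })
        public class Person {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)  // surrogate primary key
            private Long id;

            @Column(name = "FIRST_NAME", nullable = false)
            private String firstName;

            @Column(name = "LAST_NAME", nullable = false)
            private String lastName;

            public Long getId() { return id; }
            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
        }

    With this mapping, joins go through the numeric id, while the FIRST_NAME/LAST_NAME pair remains free to change without cascading key updates.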

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    Ok, so here's the thing. I'm working on an existing web application (it started life as an ASP classic app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and the Visual Studio database project. This webapp has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code. So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts by hand on each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it. We've recently created a Visual Studio Database Project to manage the schema and generate the schema-diff scripts for different targets, but I have no idea how to put the pieces together. I would like to:

    1. Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to fully deploy the whole application, because it has lots of files; I just want to deploy the files that are new or changed.
    2. Generate diff-SQL update scripts for every database target and combine them into just one script. For this I should have a list of the database names somewhere.
    3. Copy the site files and execute the generated SQL script in an easy and automated way.

    I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start, and I really want to get rid of this manual deployment. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your opinion. I know this is not a very specific question, but I've googled a lot about it and I can't seem to figure out how to do it; I've never used any automation tool for deployment. Any help will be really appreciated. Thank you all, Regards

    Read the article

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function.

    However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit imposed by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls.

    I have done a little bit of searching, and the overwhelming amount of tutorial material and pre-built libraries I have found covers throttling inbound API calls to a web app; I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a Rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are:

    1. Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests?
    2. Are there any Ruby gems or other libraries that would help me throttle the requests?

    I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.

    Read the article

  • Trouble with LINQ databind to GridView and RowDataBound

    - by Michael
    Greetings all, I am working on redesigning my personal Web site using VS 2008 and have chosen to use LINQ to create my data-access layer. Part of my site will be a little app to help manage my budget better. My first LINQ query does successfully execute and display in a GridView, but when I try to use a RowDataBound event to work with the results and refine them a bit, I get the error:

        The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?)

    The interesting part is, if I just put a var s = "s"; anywhere else in the same file, I get the same error too. If I go to other files in the web project, var s = "s"; compiles fine. Here is the LINQ query call and the page code:

        public static IQueryable pubGetRecentTransactions(int param_accountid)
        {
            clsDataContext db;
            db = new clsDataContext();
            var query = from d in db.tblMoneyTransactions
                        join p in db.tblMoneyTransactions on d.iParentTransID equals p.iTransID into dp
                        from p in dp.DefaultIfEmpty()
                        where d.iAccountID == param_accountid
                        orderby d.dtTransDate descending, d.iTransID ascending
                        select new
                        {
                            d.iTransID,
                            d.dtTransDate,
                            sTransDesc = p != null ? p.sTransDesc : d.sTransDesc,
                            d.sTransMemo,
                            d.mTransAmt,
                            d.iCheckNum,
                            d.iParentTransID,
                            d.iReconciled,
                            d.bIsTransfer
                        };
            return query;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!this.IsPostBack)
            {
                this.prvLoadData();
            }
        }

        internal void prvLoadData()
        {
            prvCtlGridTransactions.DataSource = clsMoneyTransactions.pubGetRecentTransactions(2);
            prvCtlGridTransactions.DataBind();
        }

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                var datarow = e.Row.DataItem;
                var s = "s";
                e.Row.Cells[0].Text = datarow.dtTransDate.ToShortDateString();
                e.Row.Cells[1].Text = datarow.sTransDesc;
                e.Row.Cells[2].Text = datarow.mTransAmt.ToString("c");
                e.Row.Cells[3].Text = datarow.iReconciled.ToString();
            } //end if
        } //end RowDataBound

    My googling to date hasn't found a good answer, so I turn it over to this trusted community. I appreciate your time in assisting me.

    Read the article

  • Open source gravatar-like implementations?

    - by Tauren
    I'm already using Gravatar icons for the users of my web service. However, I'm finding several problems with this approach:

    1. Only a small percentage of the users take the time to set up a Gravatar profile. My users are not tech-savvy, but they would be likely to add a dedicated photo to my site.
    2. Users of my service are encouraged to use images that depict them in proper uniform for the industry my service relates to. They wouldn't want that same picture to be used for personal purposes throughout the internet.
    3. They would not take the time or effort to manage a separate email address and Gravatar account just to have an "in-uniform" profile photo for my service.

    Before I implement my own profile-image feature, I was wondering if there are any open-source solutions that I could leverage with features similar to Gravatar. Specifically:

    1. The ability to display a thumbnail of any size (up to 512px would be fine)
    2. Takes care of caching different-sized thumbnails
    3. Has support for something like identicons, preferably pluggable with different style algorithms (monsters, etc.); even better if I can customize these
    4. The ability to fall back to Gravatar if no photo is found (see the sketch below for how that fallback is commonly wired up)

    Does anything like this exist? I haven't found it yet if it does.
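
    Purely as a reference for the fall-back idea in item 4 above, here is a minimal Java sketch, assuming the well-known Gravatar URL scheme (MD5 hash of the trimmed, lower-cased email address; the d parameter selects what is served when no photo exists, such as an identicon or a 404 so the app can substitute its own image). The class and method names are invented for illustration:

        import java.math.BigInteger;
        import java.security.MessageDigest;

        public class GravatarUrls {

            // Builds a Gravatar URL for the given email address.
            // size: requested pixel size; fallback: value for the "d" parameter,
            // e.g. "identicon", "monsterid", or "404" to get an HTTP 404 so the
            // caller can serve a locally hosted photo instead.
            public static String gravatarUrl(String email, int size, String fallback) throws Exception {
                String normalized = email.trim().toLowerCase();
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                byte[] digest = md5.digest(normalized.getBytes("UTF-8"));
                // Left-pad the hex string to 32 characters.
                String hash = String.format("%032x", new BigInteger(1, digest));
                return "http://www.gravatar.com/avatar/" + hash
                        + "?s=" + size + "&d=" + fallback;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(gravatarUrl("someone@example.com", 128, "identicon"));
            }
        }

    If a request with d=404 comes back as a 404, the application knows no Gravatar exists and can fall back to its own stored photo or identicon generator.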

    Read the article

  • eventmachine and external scripts via backticks

    - by Maciek
    I have a small HTTP server script I've written using EventMachine which needs to call external scripts/commands, and does so via backticks (``). When serving requests that don't run backticked code, everything is fine; however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general. I noticed EventMachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen. Am I out of luck here? Is there a safe way to run an external command from an EventMachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution?

    Edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in: start java myclass. The reason I'm using start is that I want those external scripts to start running and keep running after the EM request is served.

    Read the article

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux. The socket is non-blocking and, for polling, I am using epoll. Benchmarks show that the server performs fine and, according to Valgrind, there are no memory leaks or other problems. The only problem is that when a write() call is interrupted (because the client closed the connection), the server encounters a SIGPIPE. I trigger the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It does lots of requests in a row which all work perfectly. Now I press CTRL-C and restart "siege". Sometimes I am lucky and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, execute close() on the fd and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly anymore. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD)                      = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106           <<<<<
        write(5, "00001000\r\n", 10)             = -1 EPIPE (Broken pipe)      <<<<<
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    Why did the first marked write() work fine but not the second one? As you can see, the client establishes a new connection, which is accepted and then added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in "siege", because after this the server blocks any incoming connections, no matter which client they come from.

    Read the article

  • Good patterns for loose coupling in Java?

    - by Eye of Hell
    Hello. I'm new to Java, and while reading the documentation so far I can't find any good way of programming with loose coupling between objects. In the majority of languages I know (C++, C#, Python, JavaScript) I can manage objects as having 'signals' (notifications that something happened or that something is needed) and 'slots' (methods that can be connected to a signal and process the notification or do some work). In all the mentioned languages I can write something like this:

        Object1 = new Object1Class();
        Object2 = new Object2Class();
        Connect( Object1.ItemAdded, Object2.OnItemAdded );

    Now if Object1 calls/emits ItemAdded, the OnItemAdded method of Object2 will be called. Such a loose coupling technique is often referred to as 'delegates', 'signal-slot' or 'inversion of control'. Compared to the interface pattern, the techniques mentioned don't need to group signals into interfaces. Any object's method can be connected to any delegate as long as the signatures match (C++/Qt even extends this by allowing a partial signature match). So I don't need to write additional interface code for each method or group of methods, provide default implementations for interface methods that aren't used, etc. And I can't see anything like this in Java :(. Maybe I'm looking the wrong way?
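
    For comparison, here is a minimal sketch of how this is conventionally approximated in plain Java: a single-method listener interface plus a listener list plays the role of the signal, and any object can register a "slot" for it. All class and method names below are invented for illustration:

        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        // The "signal" signature: one method, no further interface grouping.
        interface ItemAddedListener {
            void onItemAdded(String item);
        }

        // The emitter keeps a list of connected listeners and "emits" by iterating it.
        class ItemContainer {
            private final List<ItemAddedListener> itemAddedListeners =
                    new CopyOnWriteArrayList<ItemAddedListener>();

            void connectItemAdded(ItemAddedListener listener) {
                itemAddedListeners.add(listener);
            }

            void add(String item) {
                // ... store the item somewhere ...
                for (ItemAddedListener listener : itemAddedListeners) {
                    listener.onItemAdded(item);   // emit the "signal"
                }
            }
        }

        public class SignalSlotDemo {
            public static void main(String[] args) {
                ItemContainer container = new ItemContainer();
                // The "slot" is supplied as an anonymous class (pre-Java-8 style).
                container.connectItemAdded(new ItemAddedListener() {
                    public void onItemAdded(String item) {
                        System.out.println("Item added: " + item);
                    }
                });
                container.add("hello");
            }
        }

    From Java 8 onwards the anonymous class can be replaced by a lambda or a method reference, which brings the call site fairly close to the Connect(...) syntax shown above.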

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:

    - Apache 2.2.3-31 on CentOS 5.4
    - PHP 5.2.10
    - Xdebug 2.0.5 with remote debugging enabled
    - APC 3.0.19
    - Doctrine ORM for PHP 1.2.1, using query caching and result caching via APC
    - MySQL 5.0.77, using query caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they take up 100% of memory. Here is a snapshot of my top output:

        PID   USER    PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+  COMMAND
        1471  apache  16   0  626m  201m   18m  S  0.0 10.2  1:11.02  httpd
        1470  apache  16   0  622m  198m   18m  S  0.0 10.1  1:14.49  httpd
        1469  apache  16   0  619m  197m   18m  S  0.0 10.0  1:11.98  httpd
        1462  apache  18   0  622m  197m   18m  S  0.0 10.0  1:11.27  httpd
        1460  apache  15   0  622m  195m   18m  S  0.0 10.0  1:12.73  httpd
        1459  apache  16   0  618m  191m   18m  S  0.0  9.7  1:13.00  httpd
        1461  apache  18   0  616m  190m   18m  S  0.0  9.7  1:14.09  httpd
        1468  apache  18   0  613m  190m   18m  S  0.0  9.7  1:12.67  httpd
        7919  apache  18   0  116m   75m   15m  S  0.0  3.8  0:19.86  httpd
        9486  apache  16   0  97.7m  56m   14m  S  0.0  2.9  0:13.51  httpd

    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • How can I track down "Template process failed: undef error" in Perl's Template Toolkit?

    - by swisstony
    I've moved a Perl CGI app from one web host to another. Everything's running fine except for Template Toolkit, which is giving the following error:

        Template process failed: undef error - This shouldn't happen at /usr/lib/perl5/5.8.8/CGI/Carp.pm line 314.

    The templates are working fine on the other web host. I've set the DEBUG_ALL flag when creating the template object, but it doesn't provide any additional info about the error, just loads of debug output. I can't post the template source as there's lots of client-specific stuff in it. I've written a simple test template and that works okay. I'm just wondering if anyone has seen this error before or has any ideas on the quickest way to find a fix for it.

    EDIT: Here's a snippet of the code that loads and processes the template.

        my $vars = {};
        $vars->{page_url}         = $page_url;
        $vars->{info}             = $info;
        $vars->{is_valid}         = 0;
        $vars->{invalid_input}    = 0;
        $vars->{is_warnings}      = 0;
        $vars->{is_invalid_price} = 0;
        $vars->{output_from_proc} = $proc_output;
        ...

        my $file = 'clientTemplate.html';

        # create ref to hash
        use Template::Constants qw( :debug );
        my $template = Template->new(
            {
                DEBUG => DEBUG_SERVICE | DEBUG_CONTEXT | DEBUG_PROVIDER | DEBUG_PLUGINS
                       | DEBUG_FILTERS | DEBUG_PARSER | DEBUG_DIRS,
                EVAL_PERL    => 1,
                INCLUDE_PATH => [ '/home/perlstuff/templates', ],
            }
        );
        $template->process( $file, $vars )
            || die "Template process failed: ", $template->error(), "\n";

    Read the article

  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC 2 project. This is the first time I am working on a real MVC web application. The ASP.NET MVC website really helped me get started quickly, but my understanding of data-model validation is still a bit fuzzy. My problem is that I do not really know where to handle my populated model when it comes to complex validation rules. For example, validating a string field with a regex is quite easy: I know I just have to decorate the field with a specific attribute, so that data-management rule lives in the model. But if I have multiple fields that need to be validated against each other, for example multiple DateTime fields that must be correctly ordered according to a specific time rule, where do I validate them? I know that I could create my own validation attributes, but sometimes the validation follows a specific path that is too complex to express with attributes. This first question also leads me to a related one: is it acceptable to validate a model in the controller? Because at the moment that is the only way I have found to do complex validation. But I find this a bit dirty; I feel it does not really fit the controller's role and it is much harder to test (multiple code paths). Thanks.

    Read the article

  • BlackBerry: Reduce space between 2 buttons in a HorizontalFieldManager

    - by user469999
    Hi, in BlackBerry I have created a HorizontalFieldManager and added some small buttons to it to display a toolbar at the bottom of the screen. But my problem is that there is too much space between two buttons. I have to reduce this space so that I can manage to place at least 6 buttons at the bottom of the screen. I am using the BFmsg.setMargin(305,0,0,40) statement. Can anyone please help me with this? Following is my code:

        BFcontacts = new ButtonField("Cnt") {
            protected void paint(Graphics graphics) {
                //Bitmap contactsbitmap = Bitmap.getBitmapResource("contacts.jpg");
                //graphics.drawBitmap(0, 0, contactsbitmap.getWidth(), contactsbitmap.getHeight(), contactsbitmap, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.drawText("Cnt", 0, 0);
            }
        };
        BFcontacts.setMargin(305, 0, 0, 10);  // vertical pos, 0, 0, horizontal pos
        HFM.add(BFcontacts);

        BFmsg = new ButtonField("Msgs") {
            protected void paint(Graphics graphics) {
                //Bitmap msgsbitmap = Bitmap.getBitmapResource("messages.jpg");
                //graphics.drawBitmap(0, 0, msgsbitmap.getWidth(), msgsbitmap.getHeight(), msgsbitmap, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.drawText("Msgs", 0, 0);
            }
        };
        BFmsg.setMargin(305, 0, 0, 40);  // vertical pos, 0, 0, horizontal pos : original
        HFM.add(BFmsg);

        add(HFM);

    Thanks in advance
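
    One way to read the code above: assuming the setMargin overload used here is (top, right, bottom, left), which matches the comments in the question, each button is being given a fairly large left margin (10 and 40), and that margin is what produces the gap in the HorizontalFieldManager. A hedged sketch of an alternative, reusing the fields from the question and keeping the per-button gap small so more buttons fit; the exact margin values are illustrative only:

        // Hedged sketch, assuming Field.setMargin(top, right, bottom, left):
        // give each button only a small horizontal gap instead of a large
        // absolute left margin, and let the HorizontalFieldManager pack them.
        ButtonField[] toolbarButtons = { BFcontacts, BFmsg /*, further buttons... */ };

        for (int i = 0; i < toolbarButtons.length; i++) {
            // 2px gap between neighbouring buttons; the 305px top margin is
            // kept from the original code to push the toolbar to the bottom.
            toolbarButtons[i].setMargin(305, 0, 0, i == 0 ? 0 : 2);
            HFM.add(toolbarButtons[i]);
        }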

    Read the article

  • Django-South introspection rule doesn't work.

    - by Ory Band
    I'm using Django 1.2.3 and South 0.7.3. I am trying to convert my app (named core) to use Django South. I have a custom model field named ImageWithThumbsField. It's basically just the ol' django.db.models.ImageField with some extra attributes such as height, width, etc. While trying to run ./manage.py convert_to_south core I receive South's freezing errors. I have no idea why; I'm probably missing something... I am using a simple custom field:

        from django.db.models import ImageField

        class ImageWithThumbsField(ImageField):
            def __init__(self, verbose_name=None, name=None, width_field=None, height_field=None, sizes=None, **kwargs):
                self.verbose_name = verbose_name
                self.name = name
                self.width_field = width_field
                self.height_field = height_field
                self.sizes = sizes
                super(ImageField, self).__init__(**kwargs)

    And this is my introspection rule, which I add to the top of my models.py:

        from south.modelsinspector import add_introspection_rules
        from lib.thumbs import ImageWithThumbsField

        add_introspection_rules(
            [
                (
                    (ImageWithThumbsField, ),
                    [],
                    {
                        "verbose_name": ["verbose_name", {"default": None}],
                        "name": ["name", {"default": None}],
                        "width_field": ["width_field", {"default": None}],
                        "height_field": ["height_field", {"default": None}],
                        "sizes": ["sizes", {"default": None}],
                    },
                ),
            ],
            ["^core/.fields/.ImageWithThumbsField",])

    These are the errors I receive:

        ! Cannot freeze field 'core.additionalmaterialphoto.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.formulaimage'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! South cannot introspect some fields; this is probably because they are custom
        ! fields. If they worked in 0.6 or below, this is because we have removed the
        ! models parser (it often broke things).
        ! To fix this, read http://south.aeracode.org/wiki/MyFieldsDontWork

    Does anybody know why? What am I doing wrong?

    Read the article

  • Using a version control system as a data backend

    - by JacobM
    I'm involved in a project that, among other things, involves storing edits and changes to a large hierarchical document (HTML-formatted text). We want to include versioning of textual changes and of structural changes. Currently we're maintaining the tree of document sections in a relational database, but as we start working on how to manage versioning of structural changes, it's clear that we're in danger of having to write a lot of the functionality that a version control system provides. We don't want to reinvent the wheel.

    Is it possible that we could use an existing version control system as the data store, at least for the document itself? Presumably we could do so by writing out new versions to the filesystem and keeping that directory under version control (and programmatically doing commits and so forth), but it would be better if we could directly interact with the repository via code. The VCS we are most familiar with is Subversion, but I'm not thrilled with how Subversion represents changes to the directory structure: it would be nice if we could see that a particular revision included moving a section from Chapter 2 to Chapter 6, rather than just seeing a new version of the tree. This sounds more like the way a system like Mercurial handles changes to the structure.

    Any advice? Do VCSs have public APIs and so forth? The project is in Java (with Spring) if it matters.
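
    On the "do VCSs have public APIs" part: several do ship embeddable Java libraries (JGit for Git, SVNKit for Subversion, for example). Purely as an illustration of driving a repository from code, and not an endorsement of any particular VCS for this design, here is a minimal JGit sketch that creates a repository, writes one version of a document into the working tree, and commits it programmatically; the paths and messages are made up:

        import java.io.File;
        import java.io.FileWriter;

        import org.eclipse.jgit.api.Git;

        public class DocumentStoreSketch {
            public static void main(String[] args) throws Exception {
                File workDir = new File("/tmp/document-store");
                workDir.mkdirs();

                // Create (or reuse) a repository that backs the document store.
                Git git = Git.init().setDirectory(workDir).call();

                // Write one version of the document into the working tree.
                FileWriter writer = new FileWriter(new File(workDir, "document.html"));
                writer.write("<html><body><h1>Chapter 6</h1></body></html>");
                writer.close();

                // Stage and commit it; each save becomes a revision.
                git.add().addFilepattern("document.html").call();
                git.commit()
                   .setMessage("Moved section from Chapter 2 to Chapter 6")
                   .setAuthor("editor", "editor@example.com")
                   .call();
            }
        }

    Note that Git, like Mercurial, tracks content snapshots rather than explicit directory operations, so a "section moved from Chapter 2 to Chapter 6" event would still have to be modelled by the application, for example in the commit message or in a metadata file.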

    Read the article

  • File-level filesystem change notification in Mac OS X

    - by Paul J. Lucas
    I want my code to be notified when any file under (either directly or indirectly) a given directory is modified. By "modified", I mean I want my code to be notified whenever a file's contents are altered, it's renamed, or it's deleted, or when a new file is added. For my application, there can be thousands of files. I looked at FSEvents, but its Technology Overview says, in part:

        The important point to take away is that the granularity of notifications is at a directory level. It tells you only that something in the directory has changed, but does not tell you what changed.

    It also says:

        The file system events API is also not designed for finding out when a particular file changes. For such purposes, the kqueues mechanism is more appropriate.

    However, in order to use kqueue on a given file, one has to open the file to obtain a file descriptor. It's impractical to manage thousands of file descriptors (and it would probably exceed the maximum allowable number of open file descriptors anyway). Curiously, under Windows I can use the ReadDirectoryChangesW() function, and it does precisely what I want. So how can one do what I want under Mac OS X? Or, asked another way: how would one go about writing the equivalent of ReadDirectoryChangesW() for Mac OS X in user space (and do so very efficiently)?

    Read the article

  • Does Team Foundation support cross-app workitem groups?

    - by drachenstern
    We're currently using Visual SourceSafe and BugNet and are looking to migrate up and away from VSS. I've been pushing for either SVN (a: we're an ASP.NET shop; b: a DVCS is not an option, no matter how much I like Hg ;-) or TFS. Well, we finally got a new dev server, so I talked the boss into installing TFS on it (30-day trial). In the meantime, we had started experimenting with FogBugz. We really like FogBugz for about 80% of what we want to do, and the other 20% is probably stuff that we don't know we want yet. I'm pushing for TFS because it allows for IDE-integrated (mostly) everything. He's pushing for FogBugz because he can group tasks by customer and then project and manage everything from one dashboard (which means I lose most of my IDE integration; no huge loss, I agree). Does TFS support a single dashboard that would span all our solutions (in this case each solution is a full app that we sell to a vertical-market client) and let us assign work items to each solution-spanning group? So for instance, I think we envision something like this:

        PROJECT1  - Bug tracker and work items
        PROJECT2  - Bug tracker and work items
        PROJECT3  - Bug tracker and work items
        CUSTOMER1 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT2)
        CUSTOMER2 - Deployment schedules, required features, specific notes (uses PROJECT2, PROJECT3)
        CUSTOMER3 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT3)

    Hopefully that makes sense; naturally it's more complicated than this, but I think I've given enough detail to paint a picture. I offered the option of creating dummy projects per customer, but he doesn't like that, and it doesn't really give us the single dashboard view that we're hoping to end up with (and that FogBugz, as we've sort of implemented things, does give us now). Has anyone got a good suggestion for a management app that would accomplish what both of us want?

    Read the article

  • SQL Server 2008: Comparing similar records - Need to still display an ID for a record when the JOIN has no matches

    - by aleppke
    I'm writing a SQL Server 2008 report that will compare genetic test results for animals. A genetic test consists of an animalId, a gene and a result. Not all animals will have the same genes tested, but I need to be able to display the results side by side for a given set of animals and only include the genes that are present for at least one of the selected animals. My TestResult table has the following data in it:

        animalId  gene  result
        1         a     CC
        1         b     CT
        1         d     TT
        2         a     CT
        2         b     CT
        2         c     TT
        3         a     CT
        3         b     TT
        3         c     CC
        3         d     CC
        3         e     TT

    I need to generate a result set that looks like the following. Note that animal 3 is not being displayed (the user doesn't want to see its results) and neither are results for gene "e", since neither animal 1 nor animal 2 has a result for that gene:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        1       NULL        2       TT          c
        1       TT          2       NULL        d

    But I can only manage to get this:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        NULL    NULL        2       TT          c
        1       TT          NULL    NULL        d

    This is the query I'm using:

        SELECT sire.animalId AS 'SireID'
              ,sire.result   AS 'SireResult'
              ,calf.animalId AS 'CalfID'
              ,calf.result   AS 'CalfResult'
              ,sire.gene     AS 'Gene'
        FROM (SELECT s.animalId
                    ,s.result
                    ,m1.gene
              FROM (SELECT [animalId]
                          ,result
                          ,gene
                    FROM TestResult
                    WHERE animalId IN (1)) s
              FULL JOIN (SELECT DISTINCT gene
                         FROM TestResult
                         WHERE animalId IN (1, 2)) m1
                ON s.marker = m1.marker) sire
        FULL JOIN (SELECT c.animalId
                         ,c.result
                         ,m2.gene
                   FROM (SELECT animalId
                               ,result
                               ,gene
                         FROM TestResult
                         WHERE animalId IN (2)) c
                   FULL JOIN (SELECT DISTINCT gene
                              FROM TestResult
                              WHERE animalId IN (1, 2)) m2
                     ON c.gene = m2.gene) calf
          ON sire.gene = calf.gene

    How do I get the SireID and CalfID columns to display their values when an animal has no record for a particular gene? I was thinking of using COALESCE, but I can't figure out how to specify the correct animalId to pass in. Any help would be appreciated.

    Read the article

  • iPhone SDK: How do you download video files to the Document Directory and then play them?

    - by Jessica
    I've been fooling around with code for ages on this one, so I would be very grateful if someone could provide a code sample that downloads this file from a server: http://www.archive.org/download/june_high/june_high_512kb.mp4 (by the way, it's not actually this file; it's just a perfect example for anyone trying to help me), and then plays it from the Documents directory. I know it seems lazy of me to ask this, but I have tried so many different variations of NSURLConnection that it's driving me crazy. Also, if I did manage to get the video file downloaded, would I be correct in assuming this code would then successfully play it?

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *path = [documentsDirectory stringByAppendingPathComponent:@"june_high_512kb.mp4"];
        NSURL *movieURL = [NSURL fileURLWithPath:path];
        self.theMovie = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
        [_theMovie play];

    If the above code would work for playing a video file from the Documents directory, then I guess the only thing I would need to know is how to download a video file from a server, which is what seems to be my major problem. Any help is greatly appreciated.

    Read the article

  • Strategies for serializing an object for auditing/logging purpose in .NET?

    - by Jiho Han
    Let's say I have an application that processes messages. Messages in this case are just objects that implement the IMessage interface, which is just a marker. In this app, if a message fails to process, I want to log it, first of all for auditing and troubleshooting purposes, and secondly because I might want to use it for re-processing. Ideally, I want the message to be serialized into a format that is human-readable. The first candidate is XML, although there are others like JSON. If I were to serialize the messages as XML, I would want to know whether the message object is XML-serializable. One way is to reflect on the type and see if it has a parameterless constructor, and the other is to require the IXmlSerializable interface. I'm not too happy with either of these approaches. There is a third option, which is to try to serialize the object and catch exceptions, but that doesn't really help: I want to stipulate, in some way, that IMessage (or a derived type) should be XML-serializable. The reflection route has obvious disadvantages such as security, performance, etc. The IXmlSerializable route locks my messages into one format, when in the future I might want to change the serialization format to JSON; the other thing is that even the simplest objects must now implement the ReadXml and WriteXml methods. Is there a route that involves the least amount of work and lets me serialize an arbitrary object (as long as it implements the marker interface) into XML, without locking future messages into XML?

    Read the article

  • How to connect to a remote EJB module from an application client

    - by Zeck
    Hi guys, I have an EJB module on a remote GlassFish server and an application client on my computer. I want to connect from the application client to the remote EJB. Here is my EJB interface:

        @Remote
        public interface BookEJBRemote {
            public String getTitle();
        }

    Here is my EJB:

        @Stateless
        public class BookEJB implements BookEJBRemote {
            @Override
            public String getTitle() {
                return "Twenty Thousand Leagues Under the Sea";
            }
        }

    I have several questions:

    1. Can I use dependency injection in the remote application client to connect to the EJB? If so, what do I need to do to achieve this? Do I need to configure anything in sun-ejb-jar.xml and sun-application-client.xml? In other words, if I use DI like @EJB MyEJBRemote ejb; how does the application client container know which EJB is to be injected? Where should I specify that information?
    2. How can I run the application client? I tried running package-appclient on the GlassFish server to get appclient.jar and copied it to my computer. Then I typed appclient.jar -client myAppClient.jar. It didn't work. How do I point it at the target server?
    3. If I cannot use DI in the client, then I guess I have to use a JNDI lookup. Do I need to configure the JNDI name in sun-ejb-jar.xml or in sun-application-client.xml? (A sketch of this route follows below.)

    No matter how I try, I never manage to run the application client. Can you post a working example? Thank you for any advice and examples.
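
    As a rough sketch of the JNDI-lookup route asked about in question 3, assuming GlassFish v3 and its portable global JNDI names: the host, port, application and module names below are illustrative assumptions and would need to match the actual deployment, and the GlassFish client libraries (gf-client) are assumed to be on the classpath:

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class BookClient {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                // GlassFish/ORB bootstrap settings for a standalone client
                // (assumed values; 3700 is the usual default ORB port).
                props.setProperty("org.omg.CORBA.ORBInitialHost", "remote.server.example.com");
                props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");

                Context ctx = new InitialContext(props);

                // Portable JNDI name: java:global/<app-name>/<module-name>/<bean-name>
                BookEJBRemote book = (BookEJBRemote) ctx.lookup(
                        "java:global/BookApp/BookModule/BookEJB");

                System.out.println(book.getTitle());
                ctx.close();
            }
        }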

    Read the article

  • How do I relate tables with different foreign key names in Kohana ORM?

    - by Matt H
    I'm building a Kohana application to manage SIP lines in Asterisk. I'm wanting to use ORM, but I'm wondering how to relate certain tables that are already well established. For example, the sip_lines table looks like this:

        +--------------------+------------------+------+-----+-------------------+-----------------------------+
        | Field              | Type             | Null | Key | Default           | Extra                       |
        +--------------------+------------------+------+-----+-------------------+-----------------------------+
        | id                 | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
        | sip_name           | varchar(80)      | NO   | UNI | NULL              |                             |
        | displayname        | varchar(48)      | NO   |     | NULL              |                             |
        | line_num           | varchar(10)      | NO   | MUL | NULL              |                             |
        | model              | varchar(12)      | NO   | MUL | NULL              |                             |
        | mac                | varchar(16)      | NO   | MUL | NULL              |                             |
        | areacode           | varchar(6)       | NO   | MUL | NULL              |                             |
        | per_line_astpp_acc | tinyint(1)       | NO   |     | 0                 |                             |
        | play_warning       | tinyint(1)       | NO   |     | 0                 |                             |
        | callout_disabled   | tinyint(1)       | NO   |     | 0                 |                             |
        | notes              | varchar(80)      | NO   |     | NULL              |                             |
        | last_update        | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
        +--------------------+------------------+------+-----+-------------------+-----------------------------+

    And sip_buddies is this (truncated):

        +----------------+------------------------------+------+-----+---------+----------------+
        | Field          | Type                         | Null | Key | Default | Extra          |
        +----------------+------------------------------+------+-----+---------+----------------+
        | id             | int(11)                      | NO   | PRI | NULL    | auto_increment |
        | name           | varchar(80)                  | NO   | UNI |         |                |
        | host           | varchar(31)                  | NO   |     |         |                |
        | lastms         | int(11)                      | NO   |     | 0       |                |
        *** snip ***

    The two tables are actually related by sip_lines.sip_name = sip_buddies.name. How do I relate them in the Kohana ORM? This wouldn't be quite right, would it?

        <?php defined('SYSPATH') or die('No direct script access.');

        /* A model for all the account information */
        class Sip_Line_Model extends ORM
        {
            protected $has_one = array("sip_buddies");
        }
        ?>

    EDIT: Actually, it'd be fair to say that these tables are not properly related with foreign keys! Doh.

    EDIT: It looks like Kohana ORM is not that flexible. ORM is probably not the way to go; it works best for completely new projects where the data model can be altered, because the key names must follow a specific naming convention or else the tables won't relate in Kohana.

    Read the article
