Search Results

Search found 856 results on 35 pages for 'replicate'.

Page 29/35 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Stretch Background Image & Resize With Browser Window

    - by user241673
    I am trying to replicate the image resizing found at http://devkick.com/lab/fsgallery/ but with the code I have below, it is not working properly. When I resize the browser window to a small width and a big height, white space shows up at the bottom of the page. Feel free to see it and edit it at http://jsbin.com/ifolu3

    The CSS:

        html, body {width:100%; height:100%; overflow:hidden;}
        div.bg {position:absolute; width:200%; height:200%; top:-50%; left:-50%;}
        img.bg {min-height:50%; min-width:50%; margin:0 auto; display:block;}

    The JS/jQuery:

        $(window).resize(function(){
            var image = $('img.bg');
            var ratio = Math.max($(window).width()/image.width(), $(window).height()/image.height());
            if ($(window).width() > $(window).height()) {
                image.css({width: image.width()*ratio, height: 'auto'});
            } else {
                image.css({width: 'auto', height: image.height()*ratio});
            }
        });

    The HTML:

        <body>
            <div class="bg">
                <img class="bg" src="bg.jpg" />
            </div>
        </body>

    Read the article

  • Hibernate: deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

        @Entity
        @Table(name = "package")
        public abstract class Package {
            protected Content content;

            @ManyToOne(cascade = {javax.persistence.CascadeType.ALL})
            @JoinColumn(name = "content_id")
            @Fetch(value = FetchMode.JOIN)
            public Content getContent() {
                return content;
            }

            public void setContent(Content content) {
                this.content = content;
            }
        }

        @Entity
        @Table(name = "content")
        public class Content {
            private Set<Content> subContents = new HashSet<Content>();

            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "subcontents",
                       joinColumns = {@JoinColumn(name = "content_id")},
                       inverseJoinColumns = {@JoinColumn(name = "elt")})
            @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                              org.hibernate.annotations.CascadeType.REPLICATE})
            @Fetch(value = FetchMode.SUBSELECT)
            public Set<Content> getSubContents() {
                return subContents;
            }

            public void setSubContents(Set<Content> subContents) {
                this.subContents = subContents;
            }
        }

    So a Package has a Content, and a Content is self-referencing in that it has many sub-Contents (which may contain sub-Contents of their own, and so on). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents), but for the case I am currently testing, each sub-Content relates to only one Package or Content. The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from table subcontents. I've tried specifically (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?
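
    A minimal sketch of the kind of deletion that triggers the constraint violation described above, assuming a plain Hibernate Session and an already-persisted Package (variable names hypothetical):

        // pkg's Content cascades via CascadeType.ALL, but rows in the
        // subcontents join table still reference the child Contents, so
        // the flush fails with the foreign key violation described above.
        Package pkg = (Package) session.get(Package.class, packageId);
        session.delete(pkg);
        session.flush();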

    Read the article

  • Emulating Visual Studio's Web Application "publish" at the command line

    - by cbp
    Hi, I am trying to automate my deployment process and am now thoroughly confused. I know that there are many questions on Stack Overflow about this, but they all have different solutions and none of them work. I have a Web Application project which I usually publish by right-clicking and selecting "Publish". I get a dialog box where I use the following options:

    - Build configuration: Release
    - Publish method: File system
    - Target location: C:\Deployments\MyWebsite
    - Replace matching files with local copies

    I should mention that in the properties of the project I have "Items to deploy" set to "Only files needed to run this application". After running this, my entire solution is built, dependencies are resolved, build events are run, web.config transformations are applied and the website is copied to C:\Deployments\MyWebsite, although non-required files such as code-behind files are not copied. I have not been able to replicate this... in fact, at this stage I'm not even sure which command-line tool I'm supposed to be using - msbuild, msdeploy or aspnet_compiler? This guy asks almost the same question but his solution doesn't work at all. For example, build events do not run correctly because the macros are not resolved. What's more, the files do not get copied into the correct directory at all... I can't even begin to explain what happens!
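
    One commonly suggested approximation from the command line uses MSBuild's _CopyWebApplication target. This is a sketch, not a full equivalent of the Publish dialog (web.config transforms in particular may need a separate step), and the paths are placeholders:

        msbuild MyWebsite.csproj /p:Configuration=Release ^
            /t:Rebuild;ResolveReferences;_CopyWebApplication ^
            /p:WebProjectOutputDir=C:\Deployments\MyWebsite ^
            /p:OutDir=C:\Deployments\MyWebsite\bin\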

    Read the article

  • Customers angry, fighting unknown DLL dependencies

    - by wheaties
    I'm a one-man show developing a C++ Windows application for a customer. Over the past several months we've been running into the same problems with missing DLL dependencies on customer machines. Despite my best efforts something keeps going wrong and we get angry emails back. My boss and my boss's boss are angry with me and the customers aren't happy. I'm hoping you guys can help out and give suggestions/ideas on how to get the deliverables in order. To head off some of the obvious suggestions:

    - I have no test machine. That is, I can't replicate the customer environment nor attempt to install the app on a "clean" system to catch gotchas before shipping.
    - I've tried using depends.exe to track down which versions of the DLLs my project depends upon. I'm shipping our code with the redistributables I've been able to find that way. After that it's an angry-customer-email waiting game.
    - I'm required to use a third-party DLL which cannot be registered (it's buggy as hell).
    - I'm not supposed to use InstallShield, any other automated installer, or write an install script. I provide written instructions on how to get the app installed (unzip, double-click the exe file).

    I'm tired of taking heat for this stuff. What am I missing that I could be doing? What should I ask for in terms of support from my employer? How should I ask for that support in a way that they'll provide it?

    Read the article

  • jQuery array problem - help please

    - by russp
    Sorry folks, I really need help with posting an array. I would imagine it's quite simple, but it's beyond me. I have this jQuery function (using sortables):

        $(function() {
            $("#col1, #col2, #col3, #col4").sortable({
                connectWith: '.column',
                items: '.portlet:not(.ui-state-disabled)',
                stop: function () {
                    serial_1 = $('#col1').sortable('serialize');
                    serial_2 = $('#col2').sortable('serialize');
                    serial_3 = $('#col3').sortable('serialize');
                    serial_4 = $('#col4').sortable('serialize');
                }
            });
        });

    Now I can post it to a database like this, and I can loop this ajax through all 4 "serials":

        $.ajax({
            url: "test.php",
            type: "post",
            data: serial_1,
            error: function(){
                alert(testit);
            }
        });

    But that is not what I want to do, as it creates 4 rows in the DB table. I want/need to create a single "nested array" from the 4 serials so that it enters the DB as 1 (one) row. My "base" database data looks like this:

        a:4:{s:4:"col1";a:3:{i:1;s:6:"forums";i:2;s:4:"chat";i:3;s:5:"blogs";}s:4:"col2";a:2:{i:1;s:5:"pages";i:2;s:7:"members";}s:4:"col3";a:2:{i:1;s:9:"galleries";i:2;s:4:"shop";}s:4:"col4";a:1:{i:1;s:4:"news";}}

    Therefore the jQuery array should replicate and create it (obviously it will change on sorting). Help please - thanks in advance.
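
    A minimal sketch of one way to do this, assuming jQuery UI's toArray method and a PHP endpoint that serialize()s the resulting nested array into the single-row format shown above (the URL and key names are placeholders):

        $("#col1, #col2, #col3, #col4").sortable({
            connectWith: '.column',
            items: '.portlet:not(.ui-state-disabled)',
            stop: function () {
                // Build one nested structure instead of four serialized
                // strings, and send it in a single POST so the server
                // stores a single row.
                var layout = {
                    col1: $('#col1').sortable('toArray'),
                    col2: $('#col2').sortable('toArray'),
                    col3: $('#col3').sortable('toArray'),
                    col4: $('#col4').sortable('toArray')
                };
                $.ajax({ url: 'test.php', type: 'post', data: layout });
            }
        });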

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL just resolves to the other server.
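
    For what it's worth, a hedged sketch of re-seeding the mirror without LOAD DATA FROM MASTER, assuming a consistent dump can be taken on the main server (the host name and log coordinates are placeholders):

        -- On the main server (shell):
        --   mysqldump --all-databases --single-transaction --master-data=2 > dump.sql
        -- On the mirror, after loading dump.sql:
        CHANGE MASTER TO
            MASTER_HOST = 'main.example.com',
            MASTER_LOG_FILE = 'mysql-bin.000123',  -- taken from dump.sql's header comment
            MASTER_LOG_POS = 4;
        START SLAVE;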

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
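
    On the last point in the list above, a hedged sketch of pinning a checkout to the central repository's state at a given time, using standard git and assuming the remote is named origin:

        # Fetch the central repo, then check out the last commit on its
        # master branch made before 4:30 yesterday afternoon.
        git fetch origin
        git checkout $(git rev-list -n 1 --before="yesterday 16:30" origin/master)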

    Read the article

  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (SELECT 2 AS trial_id FROM dual
                                UNION SELECT 3 FROM dual
                                UNION SELECT 4 FROM dual)
          AND my_view.location LIKE ('123-%')

    When I execute this query it returns results which do not conform to the my_view.location LIKE ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL and it returns the same results, despite that field being not-nullable. I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a WITH clause (the results of that query are what the selects-from-dual inline view stands in for). I can modify the query like so and it returns the expected results:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (2, 3, 4)
          AND my_view.location LIKE ('123-%')

    Unfortunately I do not know the trial values up front (they are queried for in a WITH clause) so I cannot structure my query this way. What am I doing wrong? I will say that the my_view view is composed of 3 other views whose results are combined with UNION ALL, each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.
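
    One hedged way to probe whether the inline view is what upsets the optimizer is to rewrite the IN as a join over the same derived table - semantically equivalent here, since the trial list is distinct:

        SELECT mv.*
        FROM my_view mv
             JOIN (SELECT 2 AS trial_id FROM dual
                   UNION SELECT 3 FROM dual
                   UNION SELECT 4 FROM dual) t
               ON mv.trial = t.trial_id
        WHERE mv.location LIKE ('123-%')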

    Read the article

  • FBPermissionDialog bug, showing "Welcome to Facebook" page

    - by Oliver
    I'm experiencing a weird bug that I can replicate pretty consistently with the FBConnect iPhone SDK, more specifically with the class FBPermissionDialog. The result is that instead of seeing the standard extended permissions dialog, the user is shown this: http://cl.ly/15Lx. The only way around it is for the user to delete the app and reinstall. This is how I have replicated it:

    1. On first login, the user is asked for extended permissions on something (the dialog displays correctly).
    2. The user declines the permission.
    3. The user quits the app.
    4. The user relaunches the app and, since we still need the permission, we ask again.
    5. Instead of the permission dialog, the user is shown the "Welcome to Facebook" page.

    The only way for the user to get asked again is to delete the app and reinstall. Has anyone else experienced this? Is there a workaround? Here is the code I use to ask for permission; I believe it's pretty standard.

        // Create a permission dialog
        FBPermissionDialog *dialog = [[[FBPermissionDialog alloc] init] autorelease];
        dialog.delegate = self;
        dialog.permission = @"read_stream";
        [dialog show];

    Read the article

  • How should we set up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called it will do what I expect. Let's take a very simple scenario. Assume we have a contract object that we can put on hold or take off hold. Now, writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question I have comes when we want to test the taking-off-hold service call. The problem is that putting a contract on hold can actually be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that now I have to replicate a large part of my put-on-hold service. When I change what putting something on hold means, I have two places I have to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do since it can lead to failures in one spot breaking unrelated tests elsewhere (if put-on-hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable and why?

    Read the article

  • How to read in a list of custom configuration objects

    - by Johnny
    Hi, I want to implement Craig Andera's custom XML configuration handler in a slightly different scenario. What I want to be able to do is read in a list, of arbitrary length, of custom objects defined as:

        public class TextFileInfo
        {
            public string Name { get; set; }
            public string TextFilePath { get; set; }
            public string XmlFilePath { get; set; }
        }

    I managed to replicate Craig's solution for one custom object, but what if I want several? Craig's deserialization code is:

        public class XmlSerializerSectionHandler : IConfigurationSectionHandler
        {
            public object Create(object parent, object configContext, XmlNode section)
            {
                XPathNavigator nav = section.CreateNavigator();
                string typename = (string)nav.Evaluate("string(@type)");
                Type t = Type.GetType(typename);
                XmlSerializer ser = new XmlSerializer(t);
                return ser.Deserialize(new XmlNodeReader(section));
            }
        }

    I think I could do this if I could get Type.GetType("System.Collections.Generic.List<TextFileInfo>") to work, but it throws:

        Could not load type 'System.Collections.Generic.List<Test1.TextFileInfo>' from assembly 'Test1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
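
    For what it's worth, Type.GetType wants the CLR notation for a closed generic type - backtick arity plus an assembly-qualified type argument - rather than the C# angle-bracket syntax. A sketch, assuming the type lives in an assembly named Test1 as the error message suggests:

        // List`1 means "List with one generic parameter"; the inner brackets
        // assembly-qualify the type argument so the loader can find it.
        Type t = Type.GetType(
            "System.Collections.Generic.List`1[[Test1.TextFileInfo, Test1]]");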

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just hit Command+R on the test or spec file and another window would open where the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase, since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file, then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window. This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ - his script worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

    Read the article

  • MUD (game) design concept question about timed events.

    - by mudder
    I'm trying my hand at building a MUD (multiplayer interactive-fiction game). I'm in the design/conceptualizing phase and I've run into a problem that I can't come up with a solution for. I'm hoping some more experienced programmers will have some advice. Here's the problem as best I can explain it. When the player decides to perform an action, he sends a command to the server. The server then processes the command, determines whether or not the action can be performed, and either does it or responds with a reason why it could not be done. One reason an action might fail is that the player is busy doing something else. For instance, if a player is mid-fight and has just swung a massive broadsword, it might take 3 seconds before he can repeat this action. If the player attempts to swing again too soon, the game will respond indicating that he must wait x seconds before doing that. Now, this I can probably design without much trouble. The problem I'm having is how I can replicate this behavior for AI creatures. All of the events that are being performed by the server ON ITS OWN - that is, not as an immediate reaction to something a player has done - will have to be time-sensitive. Some evil monster has cast a spell on you but must wait 30 seconds before doing it again... I think I'll probably be adding all these events to some kind of event queue, but how can I make that event queue time-sensitive?
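
    A minimal sketch of one common approach, assuming a single-threaded game loop that ticks regularly (all names hypothetical): keep a priority queue ordered by due time, and on each tick run only the events whose time has come.

        import java.util.Comparator;
        import java.util.PriorityQueue;

        class EventQueue {
            static class Event {
                final long dueAtMillis;
                final Runnable action;
                Event(long dueAtMillis, Runnable action) {
                    this.dueAtMillis = dueAtMillis;
                    this.action = action;
                }
            }

            private final PriorityQueue<Event> queue =
                new PriorityQueue<>(Comparator.comparingLong((Event e) -> e.dueAtMillis));

            // Schedule e.g. "monster may cast its spell again" 30s from now.
            void schedule(Runnable action, long delayMillis) {
                queue.add(new Event(System.currentTimeMillis() + delayMillis, action));
            }

            // Called once per server tick: run everything that has come due.
            void tick() {
                long now = System.currentTimeMillis();
                while (!queue.isEmpty() && queue.peek().dueAtMillis <= now) {
                    queue.poll().action.run();
                }
            }
        }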

    Read the article

  • Differences between Assembly Code output of the same program

    - by ultrajohn
    I have been trying to replicate the buffer overflow example 3 from Aleph One's article. I'm doing this as practice for a project in a computer security course I'm taking, so please, I badly need your help. I've been following the example, performing the tasks as I go along. My problem is that the assembly code dumped by gdb on my computer (a Debian Linux image running on VMware) is different from that in the article. There are some constructs which I find confusing. [The disassembly from my machine is not preserved in this listing; it uses mnemonics like push and mov where the article uses pushl and movl.] Here is the one from the article:

        Dump of assembler code for function main:
        0x8000490 <main>:     pushl %ebp
        0x8000491 <main+1>:   movl  %esp,%ebp
        0x8000493 <main+3>:   subl  $0x4,%esp
        0x8000496 <main+6>:   movl  $0x0,0xfffffffc(%ebp)
        0x800049d <main+13>:  pushl $0x3
        0x800049f <main+15>:  pushl $0x2
        0x80004a1 <main+17>:  pushl $0x1
        0x80004a3 <main+19>:  call  0x8000470 <function>
        0x80004a8 <main+24>:  addl  $0xc,%esp
        0x80004ab <main+27>:  movl  $0x1,0xfffffffc(%ebp)
        0x80004b2 <main+34>:  movl  0xfffffffc(%ebp),%eax
        0x80004b5 <main+37>:  pushl %eax
        0x80004b6 <main+38>:  pushl $0x80004f8
        0x80004bb <main+43>:  call  0x8000378 <printf>
        0x80004c0 <main+48>:  addl  $0x8,%esp
        0x80004c3 <main+51>:  movl  %ebp,%esp
        0x80004c5 <main+53>:  popl  %ebp
        0x80004c6 <main+54>:  ret
        0x80004c7 <main+55>:  nop

    As you can see, there are differences between the two. I'm confused and I can't totally understand the assembly code from my computer. I would like to know the differences between the two. How is pushl different from push, mov vs movl, and so on? What does the expression 0xhexvalue(%register) mean? I am sorry if I'm asking a lot, but I badly need your help. Thanks for the help, really...

    Read the article

  • How can I increment a counter every N loops in JMeter?

    - by Dave Hunt
    I want to test concurrency, and reliably replicate an issue that JMeter brought to my attention. What I want to do is set a unique identifier (currently the time in milliseconds with a counter appended) and increment the counter between loops but not between threads. The idea is that the number of threads I have set up is the number of identical identifiers before incrementing and using another. If I had 3 threads with a loop count of 2, I would want:

        1. Unique ID: <current-time-in-millis>000000
        2. Unique ID: <current-time-in-millis>000000
        3. Unique ID: <current-time-in-millis>000000
        4. Unique ID: <current-time-in-millis>000001
        5. Unique ID: <current-time-in-millis>000001
        6. Unique ID: <current-time-in-millis>000001

    I've tried using Throughput Controllers to increment a counter, as well as several other things that seemed like they should work, but had no luck. This seems like something JMeter should be able to do. Is there any way to get the value of the loop count?
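
    A hedged sketch for a JSR223 PreProcessor (Groovy), assuming a testStartMillis variable was set once at test start: JMeterVariables exposes the thread's iteration number, which is the same for every thread on a given loop.

        // vars is the JMeterVariables object JMeter passes to the script.
        int iteration = vars.getIteration() - 1              // 0-based loop index
        String suffix = String.format('%06d', iteration)     // 000000, 000001, ...
        vars.put('uniqueId', vars.get('testStartMillis') + suffix)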

    Read the article

  • RectangleGeometry with relative dimensions... how?

    - by Padu Merloti
    I'm trying to replicate the nowadays-so-fashionable "reflex" effect in a ControlTemplate for buttons I'm creating. The basic idea is to create a rectangle with a gradient fill from white to transparent, and then clip some of that semi-transparent rectangle with a RectangleGeometry. The problem is that I don't know how to define a relative rectangle geometry. I kind of worked around width by defining a large value (1000), but height is a problem. For example, it works well for buttons that have a 200 height, but doesn't do anything for smaller buttons. Any ideas?

        <Rectangle RadiusX="5" RadiusY="5" StrokeThickness="1" Stroke="Transparent">
            <Rectangle.Fill>
                <LinearGradientBrush StartPoint="0,0" EndPoint="0,0.55">
                    <GradientStop Color="#66ffffff" Offset="0.0" />
                    <GradientStop Color="Transparent" Offset="1.0" />
                </LinearGradientBrush>
            </Rectangle.Fill>
            <Rectangle.Clip>
                <RectangleGeometry Rect="0,0,1000,60" />
            </Rectangle.Clip>
        </Rectangle>

    Read the article

  • PHP weirdness extending IMagick class

    - by Jamie Carl
    This is a really weird one. I have some code that is happily working with version 2.1.1RC1 of the php5-imagick module. It's basically just a class I wrote that extends the Imagick class and manages images stored in a database. Since upgrading to version 3.0.0RC1 (thankfully only on my dev box) things have gone to hell. It seems that object members are writeable but NOT readable. Take the following sample code:

        class db_image extends Imagick
        {
            private $data;

            function __construct($id = null)
            {
                parent::__construct();
                $this->data = 'some plain text';
                echo $this->data;
            }
        }

    This will output absolutely NOTHING. My debugger indicates that the contents of $this->data are the correct string value, but I am unable to read the value back out of the member variable. Seriously, WTF? Does anyone know what is causing this, or has seen it before? I don't even know how to replicate this behaviour in my own classes.

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that the reports can point to the replicated copy instead of locking out our main databases. I have set up the 3 databases as publications and then created 3 subscribers, moving the transactions over to the subscribers - instantaneously, I hope! What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    1. The SQL statements used to query the database are obtaining locks which are stopping the transactions from updating the subscribers.
    2. The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal is to have the data syncing (ideally) continuously, or every minute. Perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

    Read the article

  • MSSQL: Views that use SELECT * need to be recreated if the underlying table changes

    - by cbp
    Is there a way to make views that use SELECT * stay in sync with the underlying table? What I have discovered is that if changes are made to the underlying table from which all columns are selected, the view needs to be 'recreated'. This can be achieved simply by running an ALTER VIEW statement. However, this can lead to some pretty dangerous situations. If you forget to recreate the view, it will not be returning the correct data. In fact, it can be returning seriously messed-up data - with the names of the columns all wrong and out of order. Nothing will pick up that the view is wrong unless you happen to have it covered by a test, or a data integrity check fails. For example, Red Gate SQL Compare doesn't pick up the fact that the view needs to be recreated. To replicate the problem, try these statements:

        CREATE TABLE Foobar (Bar varchar(20))
        CREATE VIEW v_Foobar AS SELECT * FROM Foobar
        INSERT INTO Foobar (Bar) VALUES ('Hi there')
        SELECT * FROM v_Foobar

        ALTER TABLE Foobar ADD Baz varchar(20)
        SELECT * FROM v_Foobar

        DROP VIEW v_Foobar
        DROP TABLE Foobar

    I am tempted to stop using SELECT * in views, which will be a PITA. Is there a setting somewhere, perhaps, that could fix this behaviour?
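
    For reference, a hedged sketch of one mitigation: SQL Server can rebuild a view's column metadata after the underlying table changes, so a deployment script can refresh every view that touches an altered table:

        -- Re-binds v_Foobar's metadata to the current shape of Foobar.
        EXEC sp_refreshview 'v_Foobar';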

    Read the article

  • Is there some formal way to update the browser detection files for ASP.Net?

    - by Deane
    I have an ASP.NET site on which we're using control adapters. We have the adapters mapped to a "refID" of "Default". These adapters are working fine in all browsers except Chrome and Safari. For those browsers, they do not execute. I've given up trying to figure out why - I have a question here on SO that no one has been able to answer, and I've been researching it for days now. It's just inexplicable. I have tested the same code in my local environment, and it works just fine. Additionally, no one else can replicate my problem on other servers. It seems to be somehow confined to the machines at my client's site. Could they be somehow out of date? If this is the case, is there some way to "update" the .browser files? I'm half-tempted to just copy the .browser files out of the Framework directory from my machine over to theirs, but I'm curious whether there's something more formal than this. Is there some other source of data that ASP.NET uses for browser detection other than these files?
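
    For what it's worth, a hedged sketch of the more formal routes: per-application, updated .browser files can be dropped into an App_Browsers folder under the site root; machine-wide, the framework's browser definitions are recompiled with aspnet_regbrowsers (path shown for .NET 2.0 - adjust to the installed version):

        rem Recompile the machine-wide browser definitions after editing them:
        "%WINDIR%\Microsoft.NET\Framework\v2.0.50727\aspnet_regbrowsers.exe" -i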

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags...) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD's mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a configuration of open-source technologies as a solution. Thanks
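
    A hedged sketch of the brute-force priming step, assuming a Squid master on port 3128 and standard wget/rsync (host names are placeholders, and whether a copied spool directory is portable between Squid instances would need verifying):

        # Crawl every URL through the master cache to warm it, discarding
        # the downloaded files themselves.
        wget --mirror --delete-after -e use_proxy=on \
             -e http_proxy=http://squid-master:3128/ http://www.example.com/
        # Then ship the cache contents to a remote cache server.
        rsync -a /var/spool/squid/ remote-cache:/var/spool/squid/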

    Read the article

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold-backup server for a SQL Server Express instance running a single database? I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
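
    A hedged sketch of the core of the poor man's log shipping, assuming the database uses the full recovery model and a Windows scheduled task stands in for SQL Agent on each box (paths and names are placeholders):

        -- On the primary, every 5 minutes:
        BACKUP LOG MyDb TO DISK = '\\standby\ship\MyDb_log.trn' WITH INIT;

        -- On the standby, applied in sequence against a restored base backup,
        -- leaving the database able to accept further logs:
        RESTORE LOG MyDb FROM DISK = '\\standby\ship\MyDb_log.trn' WITH NORECOVERY;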

    Read the article

  • IntelliJ Doesn't Notice Changes in Interface

    - by yar
    [I've decided to give IntelliJ another go (to replace Eclipse), since its Groovy support is supposed to be the best. But back to Java...] I have an Interface that defines a constant public static final int CHANNEL_IN = 1; and about 20 classes in my Module that implement that interface. I've decided that this constant was a bad idea so I did what I do in Eclipse: I deleted the entire line. This should cause the Project tree to light up like a Christmas tree and all classes that implement that interface and use that constant to break. Instead, this is not happening. If I don't actually double-click on the relevant classes -- which I find using grep -- the module even builds correctly (using Build - Make Module). If I double-click on a relevant class, the error is shown both in the Project Tree and in the Editor. I am not able to replicate this behavior in small tests, but in large modules it works (incorrectly) this way. Is there some relevant setting in IntelliJ for this?

    Read the article

  • I have created a PHP script and am unable to extract the primary key - flow given below, please advise

    - by Parth
    I am using a MySQL DB, working with Joomla. My requirement is to track activity (insert/update/delete) on any table and store it in another audit table using triggers - i.e., I am doing auditing. DB table structure: a few tables have neither a primary key nor an auto-increment column. The flow of my script is:

    1. Fetch all tables from the DB.
    2. Check whether each table has a trigger or not.
    3. If it does, move on and check the next table, and so on.
    4. If it doesn't have a trigger, create the triggers for the table. First check whether the table has a primary key (needed to record the row's id in the audit table for every change made); if it has one, use it in the trigger. If it doesn't have a PK, create the trigger without recording any id in the audit table.

    Now, here is my problem: I need the PK every time so that I can record the id of the particular row on which the insert/update/delete was performed, so that I can later use this audit table to replicate the changes in the production DB. Since some tables have neither a primary key nor an auto-increment column, how should I get the id of the particular row where a change was made? Please guide me, geeks!
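
    A hedged sketch of one common workaround, assuming it is acceptable to alter the audited tables: give every PK-less table a surrogate key, so the triggers always have a stable row id to record (table and column names are placeholders):

        -- MySQL: add an auto-increment surrogate key to a table that lacks one.
        ALTER TABLE some_table
            ADD COLUMN audit_row_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            ADD PRIMARY KEY (audit_row_id);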

    Read the article

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says: When merging your branch back to the trunk, however, the underlying mathematics is quite different. Your feature branch is now a mishmosh of both duplicated trunk changes and private branch changes, so there's no simple contiguous range of revisions to copy over. By specifying the --reintegrate option, you're asking Subversion to carefully replicate only those changes unique to your branch. (And in fact, it does this by comparing the latest trunk tree with the latest branch tree: the resulting difference is exactly your branch changes!) So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merge (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right? In this blog post, Mark Phippard writes: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html If we include those synched revisions, then we merge back changes that already exist in trunk. This yields unnecessary and confusing conflicts. Can somebody give me an example of when dropping reintegrate gives me unnecessary conflicts?

    Read the article
