Search Results


  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles, and I think you will find what is comfortable for you.

    Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide. If you don't commit often, it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side, and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file and a commit message, all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often, you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflicts, which just decreases productivity. It also prevents us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't committing frequently, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have had an ever-growing list of changes.

    Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work; it should be spaced out over the course of several tasks, not all at the end in a 5 minute window.
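
    As a concrete illustration of partial commits and stashing, here is a minimal sketch using Git commands (the advice itself is tool-agnostic; TortoiseHg users get the same effect with the shelve feature):

        git add -p                      # interactively stage only the hunks that belong to the finished task
        git commit -m "Finish task A"   # a frequent local commit; it doesn't have to build or be pushed yet
        git stash                       # park the remaining, unfinished work out of the way
        # ...come back to it later...
        git stash pop                   # restore the parked changes to the working copy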

    Read the article

  • Sunshine after the iCloud release?

    - by Laila
    "Why should I believe them? They're the ones that brought us MobileMe? It was not our finest hour, but we learned a lot." Steve Jobs June 6th 2011 Apple's new cloud service has been met with uncritical excitement by industry commentators.  It is wonderful what a rename can do.  Apple has had a 'cloud' offering for three years called MobileMe, successor to .MAC and  iTools, so iCloud is now the fourth internet service Apple have attempted. If this had been Microsoft, there would have been catcalls all around the blogosphere.  I'll admit that there is a lot more functionality announced for iCloud than MobileMe has ever managed to achieve, but then almost anything has more functionality than MobileMe.  It's an expensive service (£120 a year in the UK, $90 in the states), launched as far back as  June 9, 2008, that has delivered very little and suffered a string of technical problems; the documentation was mainly  a community effort, built up gradually by the frustrated and angry users. It was supposed to synchronise PC Outlook calendars but couldn't manage Microsoft Exchange (Google could, of course). It used WebDAV to allow Windows users to attach to the filestore, but didn't document how to do it. The method for downloading and uploading files to the cloud-based filestore was ridiculously clunky. It allowed you to post photos on a public site, but forgot to include a way of deleting photos. I could go on with the list, but you can explore the many sites that have flourished to inhabit the support-vacuum left by Apple. MobileMe should have had all the bright new clever things announced for iCloud. Apple dropped the ball, and allowed services such as Flickr to fill the void. However, their PR skills are such that, a name-change later (the .ME.com email address remains), it has turned a rout into a victory, and hundreds of earnest bloggers have been extolling Apple's expertise in cloud matters. This must be frustrating for the other cloud providers who have quietly got the technology working right. I wish iCloud well, even though I resent the expensive mess they made of MobileMe. Apple promise that iCloud will sync files, apps, app data, and media across all the different iOS5 devices, Macs, and PCs. It also hopes to sync music across devices, but not video content. They've offered existing MobileMe users free use of the MobileMe service for a year as the product is morphed, and they will be able to transfer to iCloud when it is launched in the autumn.  On June 30, 2012, MobileMe will die, and Apple's iWeb is also soon to join iTools and .MAC in the hereafter. So why get excited about iCloud? That all depends on the level of PC integration. Whereas iOS5 machines will be full participants in the new world of data-sharing (Sorry iPod Touch users) what about .NET libraries? There is talk of synchronising 'My Pictures' libraries with iOS5 and iMac machines, but little more detail as yet. Apple has a lot to prove with iCloud and anyone with actual experience of their past attempts to get into cloud services will be wary.

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as its primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, while trying to limit the impact on the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors in a more timely fashion than in our old process of all requests being updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors for many repositories, such as SharePoint, databases, file systems and LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has, and really, for each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about the filtering on the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WebCenter Content central repository, and then the connector can/will manage it.

    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • SO-Aware Service Explorer – Configure and Export your services from VS 2010 into the repository

    - by cibrax
    We have introduced a new Visual Studio tool called "Service Explorer" as part of the new SO-Aware SDK version 1.3 to help developers configure and export any regular WCF service into the SO-Aware service repository. This new tool is a regular Visual Studio tool window that can be opened from "View -> Other Windows -> Services Explorer". Once you open the Services Explorer, you will be able to see all the available WCF services in the Visual Studio solution.

    In the image above, you can see that a "HelloWorld" service was found in the solution and listed in the tool window on the left. There are two things you can do with a new service in the tool: you can either export it to the SO-Aware repository or associate it with an existing service version in the repository. Exporting the service to SO-Aware means that you want to create a new service version in the repository and associate the WCF service WSDL to that version. Associating the service means that you want to use a version already created in SO-Aware with the sole purpose of managing and centralizing the service configuration in SO-Aware.

    The option for exporting a service will pop up a dialog like the one below, in which you can enter some basic information about the service version you want to create and the repository location. The option for associating a service will pop up a dialog in which you can pick any existing service version in the repository and the application configuration file that you want to keep in sync for the service configuration.

    Two options are available for configuring a service: WCF Configuration or SO-Aware. The WCF Configuration option just tells the tool that the service will use the standard WCF configuration section "system.serviceModel", but that section must be updated and kept in sync with the configuration selected for the service in the repository. The SO-Aware configuration option tells the tool that the service configuration will be resolved at runtime from the repository. For example, selecting SO-Aware will generate the following configuration in the selected application configuration file:

        <configuration>
          <configSections>
            <section name="serviceRepository"
                     type="Tellago.ServiceModel.Governance.ServiceConfiguration.ServiceRepositoryConfigurationSection, Tellago.ServiceModel.Governance.ServiceConfiguration" />
          </configSections>
          <serviceRepository url="http://localhost/soaware/servicerepository.svc">
            <services>
              <service name="ref:HelloWorldService(1.0)@dev" type="SOAwareSampleService.HelloWorldService" />
            </services>
          </serviceRepository>
        </configuration>

    As you can see, the tool is a great addition to the toolset that any developer can use to manage and centralize configuration for WCF services. In addition, it can be combined with other useful tools like WSCF.Blue (Web Service Contract First) for generating the service artifacts such as schemas, service code or the service WSDL itself.

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the usual approach was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome, and intensive on the network, especially when dealing with larger amounts of data.

    Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those.

    Well, over time there has been evolution of this idea. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache. However, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, which either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy.

    SyncToy is from Microsoft, seemingly released in 2009, and it wraps a nice GUI around setting up a paired set of folders (remember the mobile Briefcase from Windows 98?), allowing you the choice of synchronization methods: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click Run. Scheduling is even easier: Microsoft has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, a -R as an argument, and the time schedule. No more complicated command lines or scripts.

    I find this especially useful when I use MS Backup to back up a system volume, but only want subsets of backup information of a data share, and ONLY when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home it is very useful for your pictures, videos, music, etc.: the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.

    Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene.

    Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
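
    For example, a nightly scheduled run might look like the following (a rough sketch only; the install path and task name are assumptions and may differ on your machine):

        schtasks /Create /SC DAILY /ST 02:00 /TN "SyncToy nightly run" /TR "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R"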

    Read the article

  • MVP, WinForms - how to avoid bloated view, presenter and presentation model

    - by MatteS
    When implementing the MVP pattern in WinForms I often find bloated view interfaces with too many properties, setters and getters. An easy example would be a view with 3 buttons and 7 textboxes, all having value, enabled and visible properties exposed from the view. Add validation results to this, and you could easily end up with an interface with 40-ish properties. Using the Presentation Model, there'll be a model with the same number of properties as well.

    How do you easily sync the view and the presentation model without bloated presenter logic that passes all the values back and forth? (With that 80-ish lines of presenter code, imagine what the presenter test that mocks the model and view will look like: 160-ish lines of code just to mock that transfer.)

    Is there any framework to handle this without resorting to WinForms databinding? (You might want to use different views than a WinForms view. According to some, this sync should be the presenter's job.) Would you use AutoMapper?

    Maybe I'm asking the wrong questions, but it seems to me MVP easily gets bloated without some good solution here.
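
    For illustration, here is a minimal, library-free sketch of the kind of convention-based copying the question hints at (AutoMapper automates the same idea); all type and member names are hypothetical:

        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public static class PropertySync
        {
            // Copies every readable property on 'source' onto a writable property
            // of the same name and type on 'target'.
            public static void Copy(object source, object target)
            {
                Dictionary<string, PropertyInfo> writable = target.GetType()
                    .GetProperties()
                    .Where(p => p.CanWrite)
                    .ToDictionary(p => p.Name);

                foreach (PropertyInfo sp in source.GetType().GetProperties().Where(p => p.CanRead))
                {
                    PropertyInfo tp;
                    if (writable.TryGetValue(sp.Name, out tp) && tp.PropertyType == sp.PropertyType)
                        tp.SetValue(target, sp.GetValue(source, null), null);
                }
            }
        }

    A presenter could then push state with two calls, PropertySync.Copy(model, view) and PropertySync.Copy(view, model), instead of forty hand-written assignments; the trade-off is that mismatched property names fail silently rather than at compile time.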

    Read the article

  • How do I use VS2010 One-Click Publish (MsDeploy) to deploy remotely from the command line?

    - by David
    On the remote web server I have installed the remote service http://x.x.x.x/MsDeployAgentService. If I use the Web Application Project's Publish command in VS2010 I can successfully publish to this remote web server and update a specific IIS website. What I want to do now is execute this capability from the command line. I am guessing it is two steps.

    First, build the web application project using the relevant build configuration:

        msbuild "C:\MyApplication\MyWebApplication.csproj" /T:Package /P:Configuration=Release

    Then issue the MsDeploy command to have it publish/sync with the remote IIS server:

        msdeploy -verb:sync -source:package="C:\MyApplication\obj\Release\Package\MyWebApplication.zip" -dest:contentPath="My Production Website",computerName=http://x.x.x.x/MsDeployAgentService,username=adminuser,password=adminpassword

    Unfortunately I get the error:

        Error: (10/05/2010 3:52:02 PM) An error occurred when the request was processed on the remote computer.
        Error: Source (sitemanifest) and destination (contentPath) are not compatible for the given operation.
        Error count: 1.

    I have tried a number of different combinations for the destination provider but no joy :( Has anyone managed to replicate VS2010 Web Application Project "One Click" Publish from the command line?
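
    One hedged observation: the error says the package's manifest source cannot be synced to a contentPath destination. The /T:Package step also emits a MyWebApplication.deploy.cmd alongside the zip; a rough, unverified sketch of invoking it against the same agent service would be:

        cd C:\MyApplication\obj\Release\Package
        MyWebApplication.deploy.cmd /Y /M:http://x.x.x.x/MsDeployAgentService /U:adminuser /P:adminpassword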

    Read the article

  • git-p4 submit fails with "Not a valid object name HEAD~261"

    - by Harlan
    I've got a git repository that I'd like to mirror to a Perforce repository. I've downloaded the git-p4 script (the more recent version that doesn't give deprecation warnings), and have been working with that. I've figured out how to pull changes from Perforce, but I'm getting an error when I try to sync changes from the git repo back. Here's what I've done so far:

        git clone [email protected]:asdf/qwerty.git
        git-p4 sync //depot/path/to/querty
        git merge remotes/p4/master

    (there was a single README file...) So, I've copied the origin to a clean, new directory, got a lovely looking merged tree of files, and git status shows I'm up to date. But:

        > git-p4 submit
        fatal: Not a valid object name HEAD~261
        Command failed: git cat-file commit HEAD~261

    This thread on the git mailing list seems to be relevant, but I can't figure out what they're doing with all the A, B, and Cs. Could someone please clarify what "Not a valid object name" means, and what I can do to fix the problem? All I want to do is to periodically snapshot the origin/master into Perforce; a full history is not required. Thanks.
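
    For what it's worth, "Not a valid object name HEAD~261" means git was asked for the commit 261 steps behind HEAD and no such ancestor exists in the current history. A quick way to see how deep the history actually is (standard git commands; the 261 figure comes straight from the error):

        git rev-list HEAD | wc -l      # number of commits reachable from HEAD
        git cat-file -t HEAD~261       # the exact lookup git-p4 is attempting; fails if the history is shorter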

    Read the article

  • Problem reading from two separate InputStreams

    - by Emil H
    I'm building a Yammer client for Android in Scala and have encountered the following issue. When two AsyncTasks try to parse an XML response (not the same one; each task has its own InputStream) from the Yammer API, the underlying stream throws an IOException with the message "null SSL pointer", as seen below:

        Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception
        java.lang.RuntimeException: An error occured while executing doInBackground()
            at android.os.AsyncTask$3.done(AsyncTask.java:200)
            at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:234)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:258)
            at java.util.concurrent.FutureTask.run(FutureTask.java:122)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
            at java.lang.Thread.run(Thread.java:1060)
        Caused by: java.io.IOException: null SSL pointer
            at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.nativeread(Native Method)
            at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.access$300(OpenSSLSocketImpl.java:55)
            at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl$SSLInputStream.read(OpenSSLSocketImpl.java:524)
            at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:103)
            at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:134)
            at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:174)
            at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:188)
            at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:178)
            at org.apache.harmony.xml.ExpatParser.parseFragment(ExpatParser.java:504)
            at org.apache.harmony.xml.ExpatParser.parseDocument(ExpatParser.java:467)
            at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:329)
            at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:286)
            at javax.xml.parsers.SAXParser.parse(SAXParser.java:361)
            at org.mediocre.util.XMLParser$.loadXML(XMLParser.scala:28)
            at org.mediocre.util.XMLParser$.loadXML(XMLParser.scala:12)
            .....

    Searching for the error didn't give much clarity. Does this have something to do with the response from the server? Or is it something else? Complete code can be found at: http://github.com/archevel/YammerTime

    I get no error if I wait until the first response is finished and then let the other complete. The request is made with the DefaultHttpClient, but this is supposedly thread-safe. What am I missing? If anything needs to be clarified just ask :)

    Cheers, Emil H
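
    One hedged observation: DefaultHttpClient is only safe for concurrent requests when it is backed by a thread-safe connection manager. If both AsyncTasks share a single client, a sketch of building it that way on the HttpClient 4.0 API bundled with Android looks like this (untested against YammerTime itself):

        import org.apache.http.conn.scheme.{PlainSocketFactory, Scheme, SchemeRegistry}
        import org.apache.http.conn.ssl.SSLSocketFactory
        import org.apache.http.impl.client.DefaultHttpClient
        import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager
        import org.apache.http.params.BasicHttpParams

        val params = new BasicHttpParams
        val registry = new SchemeRegistry
        registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory, 80))
        registry.register(new Scheme("https", SSLSocketFactory.getSocketFactory, 443))

        // One shared, thread-safe client; each AsyncTask can execute requests on it concurrently.
        val client = new DefaultHttpClient(new ThreadSafeClientConnManager(params, registry), params)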

    Read the article

  • Android - How to Use SQLiteDatabase.open?

    - by Edwin Lee
    Hi all, I'm trying to use

        SQLiteDatabase.openDatabase(
            "/data/data/edwin11.myapp/databases/myapp.db",
            null,
            (SQLiteDatabase.CREATE_IF_NECESSARY | SQLiteDatabase.NO_LOCALIZED_COLLATORS));

    to create/open a database instead of making use of the SQLiteOpenHelper (because I want to pass in the flag SQLiteDatabase.NO_LOCALIZED_COLLATORS). However, I am getting this exception for that line of code:

        04-18 09:50:03.585: ERROR/Database(3471): sqlite3_open_v2("/data/data/edwin11.myapp/databases/myapp.db", &handle, 6, NULL) failed
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): java.lang.RuntimeException: An error occured while executing doInBackground()
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at android.os.AsyncTask$3.done(AsyncTask.java:200)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:234)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:258)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at java.lang.Thread.run(Thread.java:1060)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): Caused by: android.database.sqlite.SQLiteException: unable to open database file
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at android.database.sqlite.SQLiteDatabase.dbopen(Native Method)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1584)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471):     at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:638)
        ...

    Doing some testing just before that line of code (using File.exists()) shows that the file /data/data/edwin11.myapp/databases/myapp.db does not exist. Would that be the cause of the error? (Or am I just using SQLiteDatabase.openDatabase the wrong way?) Would it help if I create the file beforehand? (Shouldn't that be taken care of by the SQLiteDatabase.CREATE_IF_NECESSARY flag that I passed in?) If creating the file manually is the way to go, is it just an empty file, or do I have to write something to it? Thanks and Regards.
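
    One hedged guess based on the File.exists() check above: openDatabase with CREATE_IF_NECESSARY will create the database file, but it will not create the parent databases/ directory for you. A small sketch of guarding against that (paths as in the question; not verified against this app):

        import java.io.File;
        import android.database.sqlite.SQLiteDatabase;

        File dbFile = new File("/data/data/edwin11.myapp/databases/myapp.db");
        File dbDir = dbFile.getParentFile();
        if (dbDir != null && !dbDir.exists()) {
            dbDir.mkdirs();   // make sure .../databases/ exists before opening
        }
        SQLiteDatabase db = SQLiteDatabase.openDatabase(
                dbFile.getPath(), null,
                SQLiteDatabase.CREATE_IF_NECESSARY | SQLiteDatabase.NO_LOCALIZED_COLLATORS);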

    Read the article

  • PHP Site Deployment Suggestion

    - by TheOnly92
    I'm currently quite troubled by the way of deployment my team is adopting. It's very old-fashioned and I know it doesn't work very well, but I don't exactly know how to change it, so please give some suggestions about it.

    Here is our current setup:

    - 2 webservers
    - 1 database server
    - 1 test server

    Current deployment process:

    1. We develop and work on the test server; every change is uploaded manually to the test server.
    2. When a change or feature is complete, we then commit the changes to the SVN repository.
    3. After committing the changes, we then upload our changes to the first webserver, where there is a cronjob running every minute to sync the files between the servers.

    Something very annoying is that whenever we upload a file just as the syncing job starts, the synced file will appear corrupted, since it is only half-uploaded. Another thing is that whenever there is a deployment fault, it is extremely difficult to revert. These are basically the problems I'm facing. What should I do?
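
    For what it's worth, one common pattern (a rough sketch only, with made-up paths, and not necessarily the right fit here) is to export each SVN revision into its own release directory and switch a symlink, so the webservers never serve a half-copied tree:

        REL=/var/www/releases/$(date +%Y%m%d%H%M%S)
        svn export http://svn.example.com/site/trunk "$REL"
        ln -sfn "$REL" /var/www/current   # the vhost docroot points at /var/www/current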

    Read the article

  • Eclipse does not refresh project files in package explorer view

    - by EugeneP
    Today I see a strange behaviour of Eclipse 3.5.2 for the first time in 3 months. First, when I run a main function, it runs a previously compiled version. Let's say I press Ctrl+F11 in the window with an open Java class and an existing main function. Usually it rebuilds the class and runs a new version. Today, even if there is a compile mistake, it runs fine anyway. So I guess it does not recompile the class.

    Next, more strangely, if I intentionally make a mistake in the code and Eclipse underlines those lines in red, the Package Explorer still does not mark the files as containing errors. They remain grey, as if there were no errors at all.

    At first I did not know how to solve this problem. I tried to reopen the project, restart Eclipse and finally reboot the OS. After the tenth attempt, after rebooting, Eclipse said that all the project's files are "OUT OF SYNC with the file system". When I pressed "Refresh" (F5) on the project's header name in the Package Explorer, it finally marked all the files with errors as containing errors, and running the main function gave the desired result.

    An hour of my work passed and this happened again, with the other project. All the same: no marking of files as red, and it runs some old version of the class no matter what, with no compile errors reported. And since Eclipse does not say that the files are out of sync, simply pressing F5 on the project does not help. What can you suggest?

    Read the article

  • Why is "rake tests" running an empty suite when I use shoulda?

    - by ryeguy
    So here is my test suite:

        class ReleaseTest < ActiveSupport::TestCase
          should_not_allow_values_for :title, '', 'blah', 'blah blah'
          should_allow_values_for :title, 'blah - bleh', 'blah blah - bleh bleh'

          def test_something
            assert true
          end
        end

    Shoulda's macros generate 5 tests, and then I have test_something below (just to see if that would matter), totalling 6 tests. They all pass as you can see below, but then it runs a 0-test suite. This happens even if I completely empty out ReleaseTest. This problem only exists if I have config.gem 'shoulda' in my environment.rb. If I explicitly do require 'shoulda' at the top of my tests, everything works fine. What would be causing this?

        /usr/bin/ruby -e STDOUT.sync=true;STDERR.sync=true;load($0=ARGV.shift) /var/lib/gems/1.9.1/bin/rake test
        Testing started at 6:58 PM ...
        (in /home/rlepidi/projects/rails/testproject)
        /usr/bin/ruby1.9.1 -I"lib:test" "/var/lib/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/release_test.rb"
        Loaded suite /var/lib/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader
        Started
        ......
        Finished in 0.029335778 seconds.
        6 tests, 6 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
        100% passed
        /usr/bin/ruby1.9.1 -I"lib:test" "/var/lib/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        /usr/bin/ruby1.9.1 -I"lib:test" "/var/lib/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        Loaded suite /var/lib/gems/1.9.1/bin/rake
        Started
        Finished in 0.000106717 seconds.
        0 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
        0% passed
        Empty test suite.

    Read the article

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now, I'm in the middle of R&D'ing scaling this site up to run in a web farm. Along with the usual suspects for web farm pain, I've noticed (pretty quickly/obviously) that the OutputCache from Server_A doesn't invalidate the OutputCache from Server_B if I try to invalidate a single server's OutputCache. This makes total sense: how can S_A 'tell' S_B to invalidate when they are physically 2 separate machines, etc.?

    So, what are our options? Velocity? I understand this will move the caching to a different layer, which means that the final result (output) will always need to be rendered, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCO or business objects are all sync'd, there's still that last rendering effort required (even if it's tiny compared to the effort to generate/sync the business objects).

    So yeah, not sure of the options here. What do other people do?
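
    One option worth noting (an assumption that ASP.NET 4 is available, which the post does not confirm): ASP.NET 4 lets you plug in a custom OutputCacheProvider, so the rendered output lives in a store all servers share and an invalidation on one server is visible to all of them. A rough sketch, where ISharedCache is a hypothetical client for whatever shared store (Velocity/AppFabric, memcached, a database table) is chosen:

        using System;
        using System.Web.Caching;

        // Hypothetical abstraction over the shared, farm-wide store.
        public interface ISharedCache
        {
            object Get(string key);
            void Put(string key, object value, DateTime utcExpiry);
            void Remove(string key);
        }

        public class SharedOutputCacheProvider : OutputCacheProvider
        {
            // Assigned once at application start-up; ASP.NET instantiates the provider
            // from web.config, so the class itself keeps a parameterless constructor.
            public static ISharedCache Store { get; set; }

            public override object Get(string key)
            {
                return Store.Get(key);
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Add must not overwrite an existing entry; hand back whatever is already cached.
                object existing = Store.Get(key);
                if (existing != null)
                    return existing;
                Store.Put(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                Store.Put(key, entry, utcExpiry);
            }

            public override void Remove(string key)
            {
                Store.Remove(key);
            }
        }

    The provider is registered under the caching/outputCache element in web.config, so the existing OutputCache attributes keep working; because every server reads and removes entries from the same store, invalidating a page on one server invalidates it for the whole farm.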

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    Ok, so here's the thing. I'm developing an existing web application (it started as an ASP Classic app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and a Visual Studio database project.

    This webapp has several "universes" (that's how we call it). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code. So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts manually on each database. I know that manual deploys can cause problems, so I'm looking for a way of automating it. We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets.

    I don't have any idea how to put the pieces together. I would like to:

    1. Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to fully deploy the whole application, because it has lots of files; I just want to deploy those that are new or changed.
    2. Generate diff-SQL update scripts for every database target and combine them into just one script. For this I should have a list of the database names somewhere.
    3. Copy the site files and execute the generated SQL script in an easy and automated way.

    I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start and I really want to get rid of this manual deploy. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your options. I know this is not a very specific question, but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool to deploy.

    Any help will be really appreciated. Thank you all. Regards

    Read the article

  • Publish failed using Ant publisher (Eclipse/datanucleus).

    - by aronp
    Dear All, I am being driven mad by the following (apparently hard) error from Eclipse:

        Publish failed using Ant publisher
        Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.

    I have seen comments on similar errors where refreshing Eclipse's view of the project helps, but it is not helping me. I have tried cleaning the project, removing it from the webserver and deleting war files, but I can't seem to clear it. I have reset my TMPDIR variable so that it uses a directory on the same filesystem, as that appeared to be another possible cause.

    The error occurs on classes which have been enhanced by DataNucleus. I have auto-enhance on for the project. The other references to this problem indicate that it is due to Eclipse's view of the project being out of step with the filesystem, and I am guessing that this has something to do with the DataNucleus enhancement. Any ideas? Thanks. I am using Eclipse 3.5.2 with the latest DataNucleus plugins.

    Stack trace:

        org.eclipse.core.runtime.CoreException: Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.
            at org.eclipse.jst.server.generic.core.internal.publishers.AbstractModuleAssembler.copyModule(AbstractModuleAssembler.java:172)
            at org.eclipse.jst.server.generic.core.internal.publishers.WarModuleAssembler.assemble(WarModuleAssembler.java:31)
            at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.assembleModule(AntPublisher.java:167)
            at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.publish(AntPublisher.java:128)
            at org.eclipse.jst.server.generic.core.internal.GenericServerBehaviour.publishModule(GenericServerBehaviour.java:82)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModule(ServerBehaviourDelegate.java:949)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModules(ServerBehaviourDelegate.java:1039)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:872)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • Can a standalone ruby script (windows and mac) reload and restart itself?

    - by user30997
    I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart. That's where I hit a roadblock. If I were using any sane platform, I could just do:

        exec('ruby', __FILE__)

    ...and be done. However, I did the following test:

        p Process.pid
        sleep 1
        exec('ruby', __FILE__)

    ...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C on the window in question. On every platform I tried this on, it is executing the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along. The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution, which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same for every system call, and I have verified with dtrace that each run is triggering a call to the execve syscall.

    So, in short, is there a way to get a Windows ruby script to restart its execution so it will be running any code, including itself, that has changed during its execution? Please note that this is not a rails application, though it does use activerecord.
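
    Since Windows has no real execve-style process replacement, one possible workaround (a sketch only, assuming Ruby 1.9+, and not tested against this setup) is to launch a fresh copy of the script and let the old process exit:

        # relaunch the current script in a new process, then quit this one
        require 'rbconfig'

        pid = Process.spawn(RbConfig.ruby, File.expand_path(__FILE__))
        Process.detach(pid)   # don't hold on to the child
        exit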

    Read the article

  • Presenting an image cropping interface

    - by wkw
    I'm trying to engineer a UI for cropping images in iPhone OS and suspect I'm going about things the hard way. My goal is pretty much what the Tapbots duo have done with Pastebot. In that app, they dim the source image but provide a movable and resizable cropping view, and the image you're cropping is in a zoomable scroll view; when you resize or move the underlying image, the cropping view adjusts appropriately. I mocked up a composite image which will give a sense of the design I'm after, along with how I presently have my view hierarchy set up, viewable here.

    The approach I've started with is the following: a UIImageView with the image to crop is in a scroll view, and a plain UIView with a black fill and a suitable transparency/alpha setting is added in front of the image view. I then use a custom UIView which is a sibling to the scroll view at a higher level, which implements the drawRect: method and for the most part calls CGImageCreateWithImageInRect to get the portion of the image's bitmap that matches the position of the cropping view and draws that to the CGContext.

    In the view controller I'm using the UIScrollViewDelegate methods to track scrolling and passing those changes to the custom cropping UIView so it stays in sync with the scroll view's contentOffset. That much is finally working. But trying to keep it in sync as the scroll view's zoomScale changes is when I figured I should ask for help.

    Looking for suggestions or guidance. My initial approach just seems like more work than is required. Could this be done with a masking layer in the ImageView? And if so, how would I set up the tracking for moving and resizing the cropping rect? My experience working with layers is non-nil, but very limited thus far.

    Read the article

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission.

    Currently I'm manually restricting the serial write speed to the baud rate (8N1 = 10 bits of serial frame per 8 bits of data, 19200 bps serial = 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time: it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights.

    This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;

                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken - time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );

                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.

    Read the article

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects that I have created in the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and I don't re-fetch information I have already fetched from the database. So all my objects will basically be in one place: my collection.

    A cool thing I can do with this is avoid database calls to get data from the database if I already got it (even if I updated it after retrieval, it is still up to date if, of course, some other process didn't update it, but that's a different concern). I don't want to be calling new Customer("James Thomas"); again if I initted James Thomas already sometime in the past. Currently I will end up with multiple copies of the same object across the appdomain, some out of sync and others in sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (if possible, per process would be even better).

    I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to their existing Add() methods (where I'm creating them) using ref, so that's not too good, I think. So how can this be implemented, if it can be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?

    So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out of date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
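
    What this describes is essentially an identity map: the central factory keeps one shared instance per key, and every caller gets that same reference back, so no ref-passing collection is involved. A minimal sketch of the idea (type and member names are illustrative, not from the post):

        using System.Collections.Generic;

        public class Customer
        {
            public int Id { get; private set; }
            public string Name { get; set; }
            public Customer(int id, string name) { Id = id; Name = name; }
        }

        public static class CustomerRepository
        {
            private static readonly Dictionary<int, Customer> Cache = new Dictionary<int, Customer>();
            private static readonly object SyncRoot = new object();

            public static Customer Get(int id)
            {
                lock (SyncRoot)
                {
                    Customer c;
                    if (!Cache.TryGetValue(id, out c))
                    {
                        c = LoadFromDatabase(id);   // hits the database only once per id
                        Cache[id] = c;
                    }
                    return c;   // every caller shares the same reference
                }
            }

            public static void Invalidate(int id)
            {
                lock (SyncRoot) { Cache.Remove(id); }   // forces a re-fetch on the next Get
            }

            private static Customer LoadFromDatabase(int id)
            {
                return new Customer(id, "James Thomas");   // placeholder for the real query
            }
        }

    Because every caller holds the same reference, an update made through one part of the code is immediately visible everywhere else in the process, and Invalidate covers the explicit "fetch it again" case.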

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least.

    We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    - Read and write data from custom .NET applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • Is there such a thing as IMAP for podcasts?

    - by Gerrit
    Is there such a thing as IMAP for podcasts? I own a desktop, laptop, iPod, smartphone and a web client, all downloading Stack Overflow podcasts (among others). They all tell me which episodes are available and which have already been played. Everything is a horrible mess, of course. My iPod is somewhat in sync with my desktop, but everything else is a random jungle.

    The same problem with e-mail is solved by IMAP. Every device gets content and meta-information from one server, and stays in sync with it. Per device, I can set preferences (do or do not download the complete archive, including junk mail). Can we implement the IMAP approach for podcasts? Or is there a better metaphor/standard to solve this problem? What would the adoption strategy look like?

    (By the way: except for the Windows smartphone, I own a full Apple stack of products. Even then, I run into this problem.)

    UPDATE: The RSS-to-IMAP link to SourceForge looks promising, but very alpha/experimental.

    UPDATE 2: The one thing RSS is missing is the command/method/parameter/attribute to delete items or mark them read/unread. RSS can only add, not remove. If RSS(N+1) (3?) could add a value for unread="true|false", it would be solved. If I cached all my RSS feeds on my own server and added the attribute myself, I would only have to convince iTunes and every other client to respect it.

    Read the article

  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a "Sync with Outlook" feature that I developed 10 years ago. Now, I'm going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called "Use MAPI Enhancements" where it uses pure MAPI to speed up how it looks for changes, and it allows notes to be synced with RTF instead of just plain text. I'm wondering if supporting two parallel paths of execution is a good idea or not.

    If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus software has "script-blocking" features which block my app from connecting to Outlook. But I believe that on the downside, my 32-bit app would not be able to connect to 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general.

    If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?

    Read the article

  • Is nested synchronized block necessary?

    - by Dan
    I am writing a multithreaded program and I have a method that has nested synchronized blocks, and I was wondering if I need the inner sync or if just the outer sync is good enough.

        public class Tester {
            private BlockingQueue<Ticket> q = new LinkedBlockingQueue<>();
            private ArrayList<Long> list = new ArrayList<>();

            public void acceptTicket(Ticket p) {
                try {
                    synchronized (q) {
                        q.put(p);
                        synchronized (list) {
                            if (list.size() < 5) {
                                list.add(p.getSize());
                            } else {
                                list.remove(0);
                                list.add(p.getSize());
                            }
                        }
                    }
                } catch (InterruptedException ex) {
                    Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    EDIT: This isn't a complete class, as I am still working on it. But essentially I am trying to emulate a ticket machine. The ticket machine maintains a list of tickets in the BlockingQueue q. Whenever a client adds a ticket to the machine, the machine also keeps track of the price of the last 5 tickets (the ArrayList list).

    Read the article

  • How accurately (in terms of time) does Windows play audio?

    - by MusiGenesis
    Let's say I play a stereo WAV file with 317,520,000 samples, which is theoretically 1 hour long. Assuming no interruptions of the playback, will the file finish playing in exactly one hour, or is there some occasional tiny variation in the playback speed such that it would be slightly more or slightly less (by some number of milliseconds) than one hour?

    I am trying to synchronize animation with audio, and I am using a System.Diagnostics.Stopwatch to keep the frames matching the audio. But if the playback speed of WAV audio in Windows can vary slightly over time, then the audio will drift out of sync with the Stopwatch-driven animation.

    Which leads to a second question: it appears that a Stopwatch, while highly granular and accurate for short durations, runs slightly fast. On my laptop, a Stopwatch run for exactly 24 hours (as measured by the computer's system time and a real stopwatch) shows an elapsed time of 24 hours plus about 5 seconds (not milliseconds). Is this a known problem with Stopwatch? (A related question would be "am I crazy?", but you can try it for yourself.) Given its usage as a diagnostics tool, I can see where a discrepancy like this would only show up when measuring long durations, for which most people would use something other than a Stopwatch.

    If I'm really lucky, then both Stopwatch and audio playback are driven by the same underlying mechanism, and thus will stay in sync with each other for days on end. Any chance this is true?
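
    (For reference, the sample count works out exactly, assuming a 44.1 kHz stereo file with both channels counted: 44,100 samples/s x 2 channels x 3,600 s = 317,520,000 samples, i.e. a nominal hour of audio.)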

    Read the article
