Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • .NET How would I build a DAL to meet my requirements?

    - by Jonno
    Assuming that I must deploy an ASP.NET app over the following 3 servers:
    1) DB - not public
    2) 'middle' - not public
    3) Web server - public
    I am not allowed to connect from the web server to the DB directly; I must pass through 'middle' - this is purely to slow down an attacker if they breached the web server. All DB access is via stored procedures; no table access. I simply want to provide the web server with an ADO.NET DataSet (I know many will dislike this, but this is the requirement). Using ASMX web services works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an SSH/VPN tunnel so that the web server connects to the DB 'via' the middle server seems to remove any possible benefit of maintaining 'middle'. Using WCF binary/TCP removes the XML problem, but there is still extra code. Is there an approach that provides the ease of SSH/VPN, but the potential benefit of having the DAL on the middle server? Many thanks.
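
    For the WCF binary/TCP option, a minimal sketch of what the middle-tier service and the web-server client might look like; the contract name, connection string and endpoint address here are illustrative assumptions, not part of the original question:

        using System.Data;
        using System.Data.SqlClient;
        using System.ServiceModel;

        // Contract shared by the middle tier and the web server.
        [ServiceContract]
        public interface IDataAccess
        {
            [OperationContract]
            DataSet ExecuteProcedure(string procedureName);
        }

        // Hosted on 'middle' (e.g. in a Windows service) - the only machine allowed to reach the DB.
        public class DataAccessService : IDataAccess
        {
            public DataSet ExecuteProcedure(string procedureName)
            {
                using (var connection = new SqlConnection("connection string to the DB server"))
                using (var command = new SqlCommand(procedureName, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    // a real version would also accept and add stored procedure parameters here

                    var result = new DataSet();
                    new SqlDataAdapter(command).Fill(result); // Fill opens and closes the connection
                    return result;
                }
            }
        }

        // On the web server: call 'middle' over net.tcp instead of opening a DB connection.
        // var factory = new ChannelFactory<IDataAccess>(
        //     new NetTcpBinding(), new EndpointAddress("net.tcp://middle:8000/dal"));
        // DataSet data = factory.CreateChannel().ExecuteProcedure("GetCustomers");

    This keeps the DAL physically on 'middle', so the web server only ever holds a service endpoint address, never a database connection string.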

    Read the article

  • Google Web Toolkit or Microsoft Technology (Silverlight, ASP.NET)

    - by NativeByte
    We have a large code base in MFC and VB. A few applications are in .NET. All these applications interoperate with each other on the user's machine and also connect with Unix servers via sockets. Recently we have started discussing a rewrite of our applications and the possibility of moving a lot of these desktop applications to the web (they would run on an intranet). A straightforward way is rewriting them in one of the .NET technologies. But a suggestion about using Google Web Toolkit has popped up, and the argument is that it would help create applications that would run in a browser on both desktop and mobile devices. One of the key problems that I see is that GWT is a large abstraction over JavaScript. It will require the team to learn GWT, JavaScript, IDEs, etc., as their experience has been primarily with Microsoft technologies and not Java. It would be easier for them to learn .NET technologies instead of GWT. I do not have in-depth knowledge of GWT and its drawbacks and pitfalls, and do not know of a parallel Microsoft technology that I should investigate. So I would appreciate it if people here could share their views or experiences using GWT or an equivalent Microsoft technology.

    Read the article

  • Refreshing Facebook session from an iframe application

    - by zombat
    I've got a Facebook iframe application that is completely external. By this I mean that once a user accesses the canvas URL to load the application, all the links in the iframe app go to my servers, and the canvas page never gets refreshed unless the user navigates to somewhere else on Facebook and comes back (or does a browser refresh). On the initial load of the app where Facebook creates the iframe, I get passed all the usual parameters like fb_sig_user which allows me to create an internal app session based on the facebook user. This app session (which is not the Facebook session, it's my own app session) is all I need to allow the user to work with the app. The problem comes an hour later. If the user leaves the computer, or uses the app for more than an hour, the Facebook session expires. There are some app pages which require fetching friend information, and once the FB session has expired, these pages break, throwing out errors such as "Error: Session key invalid or no longer valid". My question is whether there is a way to refresh the user's Facebook session from within an iframe application to keep it from expiring an hour later. Do any of the API calls do this? Is there a Facebook Connect trick to ping something? Is there any definitive method to keep it alive? I haven't been able to find any examples that specifically address this.

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: Store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.

    Assumptions: A database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.

    Some Ideal Attributes of a Solution:
    - Causes no noticeable performance hit on the server being monitored
    - Has a very high precision of measurement
    - Does not store useless or redundant information
    - Is easy to query (lends itself to gathering/displaying useful information)
    - Lends itself to being graphed easily
    - Is accurate
    - Is elegant

    Primary Questions:
    1) What is a good design/method/scheme for triggering the storing of statistics?
    2) What is a good database design for how to actually store the data?

    Example answers... that are sort of vague and lame:
    1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table indexed by timestamp and/or server.
    2) I could have a daemon monitoring the performance stuff I care about, and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
    3) I could trigger either as in #2, but store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items, and few rows for seldom-changing items. Etc.

    In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
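
    As a hedged illustration of example answer #1 (fixed-interval sampling into one flat table), a minimal C# sketch; the table name, column names and connection string are assumptions made up for the example:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class MetricsSampler
        {
            // Once per fixed interval, write one row of measurements keyed by server and timestamp.
            static void Main()
            {
                var timer = new Timer(_ => StoreSample(), null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
                Console.ReadLine(); // keep the process alive; a real daemon/service would do this differently
            }

            static void StoreSample()
            {
                using (var connection = new SqlConnection("connection string to the stats database"))
                using (var command = new SqlCommand(
                    "INSERT INTO PerfSample (SampledAt, ServerName, LoggedOnCustomers, WidgetsInProgress) " +
                    "VALUES (@at, @server, @customers, @widgets)", connection))
                {
                    command.Parameters.AddWithValue("@at", DateTime.UtcNow);
                    command.Parameters.AddWithValue("@server", Environment.MachineName);
                    command.Parameters.AddWithValue("@customers", CountLoggedOnCustomers());
                    command.Parameters.AddWithValue("@widgets", CountWidgetsInProgress());

                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }

            // Per the stated assumption, you already know how to gather these; placeholders only.
            static int CountLoggedOnCustomers() { return 0; }
            static int CountWidgetsInProgress() { return 0; }
        }

    A wide flat table like this is easy to graph (one column per metric, indexed by SampledAt and ServerName), at the cost of schema changes whenever a new metric is added.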

    Read the article

  • Is JavaEE really portable?

    - by Bozho
    I'm just implementing a JavaEE assignment I was given at an interview. I have some prior experience with EJB, but nothing related to JMS and MDBs. So here's what I find through the numerous examples:
    - application servers bind their topics and queues to different JNDI names - for example topic/queue, jms
    - the activationConfig property is required on JBoss, while in the Sun tutorial it is not
    - after starting my application, JBoss warns me that my topic isn't bound (it isn't actually - I haven't bound it, but I expect it to be bound automatically - in fact, in an example for JBoss 4.0 automatic binding does seem to happen). A suggested solution is to map it in some JBoss files or even use JBoss-specific annotations.
    This might be just JBoss, but since it is certified to implement the spec, it appears the spec doesn't specify these things. And there all the alleged portability vanishes. So I wonder - how come it is claimed that JavaEE is portable and you can take an EAR and deploy it on another application server and it magically runs, if such extremely basic things don't appear to be portable at all? P.S. Sorry for the rant, but I assume I might be doing/getting something wrong, so state your opinions.

    Read the article

  • Stubbing an ActsAs Rails Plugin

    - by Rabbott
    I need to create a plugin much like Authlogic (or even just add on to Authlogic), but due to requirements beyond my control I need my plugin to authenticate using SOAP. Basically, the plugin would require that anyone accessing the controller (before_filter would be fine) authenticate first. I have ZERO control over the login page or the SOAP server; I am simply a client attempting to authenticate against the provider's SOAP web service. Here is what happens: before_filter realizes that no session[:credential] is set, and forwards the user to the URL on the provider's servers. The user enters their credentials and, once authenticated, the web service forwards the user to a URL that has been entered by their sysadmins, attaching a token to the URL on its way back. I need to take that token, append it to some parameters stored in a local YAML file, and make the SOAP call to the provider's server. If all goes as planned, I set session[:credential] to the result of the SOAP call and forward the user to the root page. Subsequent calls to the before_filter will not make the SOAP call, because session[:credential] is set. Ideally I think this would be awesome to slap on top of Authlogic, but I'm not sure how to do that, so I started to create my own acts_as_soap_authentic plugin, which isn't causing errors, but doesn't do anything. Does anyone have any pointers or tips as to how I can get the ball rolling here? It seems simple, but is proving not to be.

    Read the article

  • joining / merging two arrays

    - by Shishant
    I have two arrays like this (actually this is MySQL data retrieved from two different servers):

        $array1 = array(
            0 => array('id' => 1, 'name' => 'somename'),
            1 => array('id' => 2, 'name' => 'somename2')
        );

        $array2 = array(
            0 => array('thdl_id' => 1, 'otherdate' => 'spmethings'),
            1 => array('thdl_id' => 2, 'otherdate' => 'spmethings22')
        );

    How can I join/merge the arrays so the result looks like this?

        $new_array = array(
            0 => array('id' => 1, 'name' => 'somename', 'otherdate' => 'spmethings'),
            1 => array('id' => 2, 'name' => 'somename2', 'otherdate' => 'spmethings22')
        );

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of the used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern. The hi-lo algorithm sits in between these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.
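
    To make that trade-off concrete, here is a rough conceptual sketch of a hi-lo generator - not NHibernate's actual implementation - where a block of values is reserved through the database and allocation within the block happens in memory. The table and column names are assumptions for illustration:

        using System.Data.SqlClient;

        // Conceptual hi-lo allocator: reserve a "hi" block via the database,
        // then allocate "lo" values from memory until the block is exhausted.
        // Not thread-safe; a real implementation would synchronise NextKey().
        public class HiLoKeyGenerator
        {
            private readonly int maxLo;   // block size, e.g. 100
            private readonly string connectionString;
            private long hi;
            private int lo;

            public HiLoKeyGenerator(string connectionString, int maxLo)
            {
                this.connectionString = connectionString;
                this.maxLo = maxLo;
                this.lo = maxLo; // force a block reservation on first use
            }

            public long NextKey()
            {
                if (lo >= maxLo)
                {
                    hi = ReserveNextHi(); // one short DB round-trip per block
                    lo = 0;
                }
                return hi * maxLo + lo++;
            }

            // Atomically read and increment the shared "next hi" value so that
            // multiple processes never reserve the same block of keys.
            private long ReserveNextHi()
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "UPDATE HiLoKeys SET NextHi = NextHi + 1 OUTPUT deleted.NextHi", connection))
                {
                    connection.Open();
                    return System.Convert.ToInt64(command.ExecuteScalar());
                }
            }
        }

    The migration question then reduces to seeding the hi-lo row so that the first reserved block starts above the current maximum key already in each table.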

    Read the article

  • Getting a lightweight installation of Java Eclipse

    - by liam
    Having dealt with yet another stupid Eclipse problem, I want to get the lightest, most minimal Eclipse installation possible. To be clear, I use Eclipse for two things:
    - Editing Java
    - Debugging Java
    Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc.). I have not found any aspect of working in Eclipse on those tasks to be efficient or even reliable, so I do not want plug-ins related to them. Even the lightest install from the eclipse.org site bundles things I don't want (bugzilla, mylyn, cvs, xml_ui), and I have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will:
    1) Ignore svn metadata
    2) Include the full-featured editor (intellisense and type-finding)
    3) Include the full-featured debugger (standard eclipse/jdk)
    4) Not have any extra plug-ins, platforms, or "integrations" with other platforms - specifically, I don't want to deal with plug-ins relating to Maven, JSP validation, Javascript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled tomcat/glassfish/etc., J2EE tools, or anything of the like.
    I do primarily spring/hibernate/web-mvc apps, and have never dealt with an Eclipse plug-in that handles any of it gracefully; I can work effectively with my own toolset, but Eclipse extensions do nothing but get in the way. I have worked with plain Eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of the Spring-SourceTools, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to netbeans/intellij is not an option, and my teammates work with svn-controlled .class/.project files, so it pretty much has to be Eclipse. Does anyone have any good advice on how I can save a few grey hairs?

    Read the article

  • ant ftp task "Could not date test remote file"

    - by avok00
    Hi guys! I am using the Ant ftp task to deploy my project files to a remote app server. Ant is not able to detect the date of the remote files and re-uploads all files every time. When I start Ant in debug mode it says:

        [ftp] checking date for mailer.war
        [ftp] Could not date test remote file: mailer.war assuming out of date.

    The remote server is MS FTP (Windows Vista version). The Ant version is 1.8.2; I use commons-net-2.2 and jakarta-oro-2.0.8 (could not find a newer version). My ant task looks like this:

        <!-- Deploy new and changed files -->
        <target name="deploy" depends="package" description="Deploy new and changed files">
            <ftp server="localhost"
                 userid=""
                 password=""
                 action="send"
                 depends="yes"
                 passive="true"
                 systemTypeKey="WINDOWS"
                 serverTimeZoneConfig="Europe/Sofia"
                 defaultDateFormatConfig="MMM dd yyyy"
                 recentDateFormatConfig="MMM dd HH:mm"
                 binary="true"
                 retriesAllowed="3"
                 verbose="true">
                <fileset dir="${webapp.artefacts.path}"/>
            </ftp>
        </target>

    I read in Ant: The Definitive Guide that I need a version of jakarta-oro AFTER 2.0.8 to talk to MS FTP servers, but I was not able to find such a version. The Jakarta ORO site (http://jakarta.apache.org/oro/) says the ORO project is retired as of 2010, but their latest distribution is from 2003! Please, can anyone help me with this? Any solution or any alternatives to the Ant ftp task? Thanks!

    Read the article

  • Cleaning up a sparsebundle with a script

    - by nickg
    I'm using Time Machine to back up some servers to a sparse disk image bundle and I'd like to have a script to clean up the old backups and re-size the image once space has been freed up. I'm fairly certain the data is protected, because if I delete the old backups by right-clicking I have to enter a password to be able to delete them. To allow my script to delete them, I've been running it as root. For some reason it just won't work, and for every file it tries to delete I get:

        rm: /file/: Operation not permitted

    Here is what I have as my script:

        #!/bin/bash
        for server in servername; do
            /usr/bin/hdiutil attach -mountpoint /path/to/mountpoint /path/to/sparsebundle/$server.sparsebundle/;
            /bin/sleep 10;
            /usr/bin/find /path/to/mountpoint -type d -mtime +7 -exec /bin/rm -rf {} \;
            /usr/bin/hdiutil unmount /path/to/mountpoint;
            /bin/sleep 10;
            /usr/bin/hdiutil compact /path/to/sparsebundle/$server.sparsebundle/;
        done
        exit;

    One of the problems I thought was causing this was that it needed to have a mountpoint specified, since the default mount was to /Volumes/Time\ Machine\ Backups/ - that's why I created a mountpoint. I also thought that it was trying to delete the files too quickly after mounting, before the image was actually mounted - that's why I added the sleep. I've also tried using the -delete option for find instead of -exec, but it didn't make any difference. Any help on this would be greatly appreciated because I'm out of ideas as to why this won't work.

    Read the article

  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it:

        public class Domain : BusinessObject
        {
            public virtual long TenantID { get; set; }
            public virtual string Name { get; set; }
        }

    The TenantID is driving me nuts, as it has to be accounted for almost everywhere, and it's a hassle from a security standpoint: what happens if a malicious API user changes TenantID to some other value and mixes things up? What I want to do is get rid of this TenantID in our domain objects altogether, and have either NHibernate or SQL Server deal with it. From what I've already read on the Internets, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), NHibernate filters, SQL views, and combinations thereof. Now, my requirements are as follows:
    - Remove any mention of TenantID from domain objects
    - ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints)
    - ...and obviously provide support for filtering based on this criterion, so that customers will never see each other's data
    - If possible, avoid SQL Server views
    - Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the generally highly concurrent nature of SaaS apps
    What are your thoughts on that?
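
    For the NHibernate-filters route, a hedged sketch of what enabling a tenant filter per session might look like. The filter itself ('tenantFilter', with a 'tenantId' parameter and a 'TenantID = :tenantId' condition on each tenant-scoped class) is assumed to be declared in the mappings, and all names are illustrative only:

        using NHibernate;

        // Wraps session creation so that every query issued through this session
        // is automatically restricted to the current tenant's rows.
        public class TenantSessionFactory
        {
            private readonly ISessionFactory sessionFactory;

            public TenantSessionFactory(ISessionFactory sessionFactory)
            {
                this.sessionFactory = sessionFactory;
            }

            public ISession OpenSessionFor(long tenantId)
            {
                ISession session = sessionFactory.OpenSession();

                // The mapping defines the filter condition; enabling it here means
                // the domain classes themselves never need to expose TenantID.
                session.EnableFilter("tenantFilter")
                       .SetParameter("tenantId", tenantId);

                return session;
            }
        }

    The tenant id would come from whatever authenticates the API caller, so a malicious user has nothing to tamper with on the domain objects; inserts would still need the column populated, for example via a default constraint fed from CONTEXT_INFO as mentioned above.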

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ] I'm using CouchDB also to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes their password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up-to-date in general? Thanks in advance.

    Read the article

  • ASP.NET Webforms site using HTTPCookie with 100 year timeout times out after 20 minutes

    - by Rob
    I have a site that is using Forms Auth. The client does not want the site session to expire at all for users. In the login page codebehind, the following code is used:

        // user passed validation
        FormsAuthentication.Initialize();

        // grab the user's roles out of the database
        String strRole = AssignRoles(UserName.Text);

        // create a forms auth ticket with an expiration date 100 years from now and make it persistent
        FormsAuthenticationTicket fat = new FormsAuthenticationTicket(1, UserName.Text,
            DateTime.Now, DateTime.Now.AddYears(100), true, strRole,
            FormsAuthentication.FormsCookiePath);

        // create a cookie and throw the ticket in there, set expiration date to 100 years from now
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(fat)) { Expires = DateTime.Now.AddYears(100) };

        // add the cookie to the response queue
        Response.Cookies.Add(cookie);
        Response.Redirect(FormsAuthentication.GetRedirectUrl(UserName.Text, false));

    The web.config auth section looks like this:

        <authentication mode="Forms">
            <forms name="APLOnlineCompliance" loginUrl="~/Login.aspx" defaultUrl="~/Course/CourseViewer.aspx" />
        </authentication>

    When I log into the site I do see the cookie correctly being sent to the browser and passed back up. However, when I walk away for 20 minutes or so, then come back and try to do anything on the site, the login window reappears. This solution was working for a while on our servers - now the problem is back. The problem doesn't occur on my local dev box running Cassini in VS2008. Any ideas on how to fix this?
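
    As a hedged diagnostic sketch (not a fix), the ticket inside the returned cookie can be decrypted server-side to check whether the ticket itself is expiring or whether the cookie simply stops being decryptable after the timeout. The page name and trace output are illustrative assumptions:

        using System;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;

        public partial class DebugAuth : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
                if (authCookie == null)
                {
                    Trace.Write("No forms auth cookie was sent with this request.");
                    return;
                }

                // May fail if the cookie can no longer be decrypted
                // (for example, if the machine key changed between requests).
                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);

                // With the 100-year ticket above, Expired should stay false and Expiration
                // should be far in the future; if decryption fails instead, the ticket's
                // own expiration was never the problem.
                Trace.Write(string.Format("Expired: {0}, Expires: {1}, Persistent: {2}",
                    ticket.Expired, ticket.Expiration, ticket.IsPersistent));
            }
        }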

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a pre-requisite for White to work properly). Is that all I need to do, or are there additional steps required?
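
    One hedged way to avoid the hard-coded path (the folder name below is an assumption about the repository layout, not something from the original question) is to derive it from the test assembly's location at run time, so it works both locally and in whatever directory the build agent checks the code out to:

        using System;
        using System.IO;

        public static class SitePathLocator
        {
            // Walk up from the test assembly's directory until the website folder is found.
            public static string FindPathToSite()
            {
                var dir = new DirectoryInfo(AppDomain.CurrentDomain.BaseDirectory);
                while (dir != null)
                {
                    string candidate = Path.Combine(dir.FullName, @"FrontEnd\MyProject.FrontEnd.Web");
                    if (Directory.Exists(candidate))
                        return candidate;
                    dir = dir.Parent;
                }
                throw new DirectoryNotFoundException("Could not locate the website folder above the test directory.");
            }
        }

    StartWebServer() could then use SitePathLocator.FindPathToSite() instead of the absolute path; alternatively, the build configuration could pass the checkout directory to the test process as an environment variable, read in the fixture with Environment.GetEnvironmentVariable.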

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second DELL R805 server so that we can create a fail-over cluster with our current R805, which is currently in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other a spare desktop that was lying around), and with the Hyper-V role enabled they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on an external spare computer that we have, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something with which we can test out how to set up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let you download software and provide serial keys to use. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file. During installation, there was no serial number field (whereas Visual Studio 2008 had this field at the beginning of the installation process). The same goes for SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that products as expensive as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose that the serial is embedded into the product itself - either in the installation binaries or in a separate config file - and is therefore already in the ISO I download, so I do not have to enter it. So my question is: how is it done technically? Is each 2 GB ISO generated on demand on the server to embed a serial each time the ISO is requested? I suppose that if it is done that way, it has a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by avoiding asking for a serial number), but I really don't see how to do it with low impact on server performance.

    Read the article

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server. The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on. So I'm familiar with everything that may need to be done to perform an update:
    - Copy website files
    - Replace the service binary
    - Install updated components in the GAC
    - Configure IIS
    - Update database schema
    After some research it seems that, to reduce deployment time and to be able to let the other sysadmins handle deployment, I want to deploy all of these as an MSI, except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates. It would also be nice to modularize it, so for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask in a single package?

    Read the article

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier over time, and the apt-get installations with Ubuntu/Debian are heading in the right direction. But how can Linux be more user-friendly for us in the future? I thought that more could be automated, like in an IDE environment - e.g. typing svn would show all the commands and a description of each as you move between them with your keyboard. That would be great, but that's just one example. Another is navigation between folders in the terminal: right now you have to type a lot just to jump from/to different folders. It would be great to have some more automation here too. I know that these extra features would slow down the server, but it's 2010 now; these features are not that heavy for the CPU, and they would make a server more user-friendly and encourage its maintenance rather than frighten you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers - is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it is so repetitive today and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • Google Analytics Script inside XML File

    - by mnml
    Hi, I would like to know if I can add my ecommerce analytics tags inside an XML-formatted file?

        <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXX-X']);
          _gaq.push(['_trackPageview']);
          _gaq.push(['_addTrans',
            '1234',           // order ID - required
            'Acme Clothing',  // affiliation or store name
            '11.99',          // total - required
            '1.29',           // tax
            '5',              // shipping
            'San Jose',       // city
            'California',     // state or province
            'USA'             // country
          ]);

          // add item might be called for every item in the shopping cart
          // where your ecommerce engine loops through each item in the cart and
          // prints out _addItem for each
          _gaq.push(['_addItem',
            '1234',           // order ID - required
            'DD44',           // SKU/code
            'T-Shirt',        // product name
            'Green Medium',   // category or variation
            '11.99',          // unit price - required
            '1'               // quantity - required
          ]);

          _gaq.push(['_trackTrans']); // submits transaction to the Analytics servers

          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
          })();
        </script>

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day they want to be able to run a command that takes a list of files whose commits over the day get merged with their local prod - and only those files - even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind in the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.

    Read the article

  • What's a way for a client to automatically resolve the ip address of a server?

    - by zooropa
    The project I am working on is a client/server architecture. In a LAN environment, I want the clients to be able to automatically determine the server's address; I want to avoid having to manually configure each client with the IP address of the server. What is the best way to do this? Some alternatives I have thought about are:
    - The server could listen for broadcast packets from the clients. The message from the client would be a request for the IP address of the server. The server would respond with its address.
    - The machine running my project's server could also have a BIND server running. The LAN's router could be configured to use it as one of its DNS servers. I think I saw that there is a BIND library. Does that mean I can build the BIND service into my server so that BIND doesn't have to be installed on the server?
    Any other ideas? What have you done in the past? What are the pros/cons of these approaches and others that might be suggested? Thanks for your help!
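
    A rough sketch of the first alternative (UDP broadcast discovery) in C#; the port number and message strings are arbitrary assumptions for illustration:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        static class Discovery
        {
            const int Port = 17500; // arbitrary discovery port

            // Server side: answer any "WHERE_ARE_YOU" broadcast with "HERE_I_AM".
            public static void RunServer()
            {
                using (var listener = new UdpClient(Port))
                {
                    while (true)
                    {
                        var clientEndpoint = new IPEndPoint(IPAddress.Any, 0);
                        byte[] request = listener.Receive(ref clientEndpoint);
                        if (Encoding.ASCII.GetString(request) == "WHERE_ARE_YOU")
                        {
                            byte[] reply = Encoding.ASCII.GetBytes("HERE_I_AM");
                            listener.Send(reply, reply.Length, clientEndpoint);
                        }
                    }
                }
            }

            // Client side: broadcast the question and read the server's address from the reply.
            public static IPAddress FindServer()
            {
                using (var client = new UdpClient())
                {
                    client.EnableBroadcast = true;
                    client.Client.ReceiveTimeout = 3000; // give up after 3 seconds

                    byte[] request = Encoding.ASCII.GetBytes("WHERE_ARE_YOU");
                    client.Send(request, request.Length, new IPEndPoint(IPAddress.Broadcast, Port));

                    var serverEndpoint = new IPEndPoint(IPAddress.Any, 0);
                    client.Receive(ref serverEndpoint); // the reply's source address is the server
                    return serverEndpoint.Address;
                }
            }
        }

    The client never needs to parse the reply payload at all: the source address of the response packet is the server's IP, which is the nice property of this approach compared to maintaining DNS entries.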

    Read the article

  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers like JBoss, Glassfish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may be able to deploy changes much faster; once the functionality is in place, you can iron out bugs on the production platform. Unfortunately, finding that server is a time-consuming process, where others' experience is invaluable. If you have experience with any server with an Eclipse server adapter, please add your findings and your recommendation. I believe the following is of interest:
    - Does saving a file trigger an update in the server, giving edit + save + reload-browser functionality?
    - How fast is a deployment? (Saved a JSP? Java class? Static file?)
    - Can the actual server be downloaded by the Server Adapter Wizard, allowing for easy installation?
    - Are there known bugs and issues with suitable work-arounds?
    - Is debugging fully supported? Is profiling?
    - Would you recommend this server?
    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.

    Read the article

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's the backstory. I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and both I and my users are getting tired of having to update manually. Me, because my current process sucks, and whenever I screw up an update, I get pitchforks. That process is:
    - Manually FTP to my website
    - Update a file "newest.txt" with the newest version
    - Update index.html with a link to the newest version
    - Hope for people to see the "there's an update" message
    - Have them manually download the update
    Users, because, well: "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?" Over the past few hours I have looked into:
    - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation
    - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use
    - http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if I were at a company paying me I could spend the time, but then I may as well pay for wyBuild.
    - http://www.kineticjump.com/update/default.aspx - Ditto.
    - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.
    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...
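
    Building on the "newest.txt" setup already described above, a hedged sketch of a minimal in-app update check; the URLs and the assumption that newest.txt contains just a version string are made up for the example:

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Reflection;

        static class UpdateChecker
        {
            // Assumes newest.txt contains just a version string such as "1.2.3.0".
            public static void CheckForUpdate()
            {
                try
                {
                    string text;
                    using (var web = new WebClient())
                    {
                        text = web.DownloadString("http://example.com/newest.txt"); // hypothetical URL
                    }

                    var newest = new Version(text.Trim());
                    var current = Assembly.GetExecutingAssembly().GetName().Version;

                    if (newest > current)
                    {
                        // Point the user at the download page instead of hoping they notice index.html.
                        Process.Start("http://example.com/download.html"); // hypothetical URL
                    }
                }
                catch (Exception)
                {
                    // Never let a failed update check break the app; ignore and try again next run.
                }
            }
        }

    Calling this at startup removes the "hope for people to see the message" step; actually replacing the running executable is the part the libraries listed above try to solve.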

    Read the article

  • How do I test if storage-conf is being loaded in Cassandra 0.7.3?

    - by user657253
    I have installed Cassandra and gotten it working on two machines. I have followed the instructions to hook them up to each other by configuring the storage-conf.xml files. Both machines respond well to thrift and to command-line cassandra. This is the tutorial I used to set up the storage-conf.xml files. The tutorial says that if I run netstat, I should NOT see Cassandra bound to 127.0.0.1 on my seed node; I should see it bound to my internal IP, which I have configured in the storage-conf.xml file. I have rebooted the servers and relaunched Cassandra. Still, I see the localhost address instead of the correct internal IP address. Is my .yaml file overriding the storage-conf.xml file? If so, how do I delete the appropriate things in the .yaml? Or how do I tell Cassandra to look for my storage-conf.xml file? A few things I have tried: renaming the cassandra.yaml file, in which case Cassandra will not load. If I rename storage-conf.xml, Cassandra does load. When I installed Cassandra, it did not come with a storage-conf.xml file; I had to grab it off the Apache wiki.

    Read the article
