Search Results

Search found 15041 results on 602 pages for 'breaking changes'.

Page 117 of 602

  • What can stop IIS7 from restarting an ASP.NET app when updating a dll in the bin folder?

    - by Carl Björknäs
    We're running ASP.NET 2.0 on MS Server 2008 and IIS 7. During the last releases the app pool hasn't been restarted automatically after changes in the bin folder. It works like a charm on our test server but not on the live server. The site is browsable but runs with the logic of the old version of the updated DLL. One of the changes we have made lately is that one of the DLLs in the bin folder consists of other DLLs that have been merged with ILMerge. Interop.ADODB.dll and Interop.CDO.dll are included in the merged DLL. It is the user DLL within the merged DLL that has been updated. What can possibly hinder IIS from restarting the app pool even though a file has changed in the bin folder?
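
    A hedged aside, not part of the original question: if the pool will not recycle on its own after a deployment, IIS 7's appcmd.exe can recycle it by hand (the pool name below is a placeholder):

        %windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"MyAppPool"

    Touching web.config is another change ASP.NET reliably reacts to by restarting the application.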

    Read the article

  • Plugged in, not charging – Windows 7

    - by Kelly Jones
    Just a quick post on something I ran into lately with my Dell Precision M4500 laptop (monster laptop!). I noticed the little icon in the system tray for the power options was stating that it was “plugged in, not charging”. I don’t know why it was stating this, but I quickly found a fix for it on the net. I found the fix in this forum on CNET. Here’s the fix: in order to correct problems with the battery's power management software, follow the steps below.
    1. Click Start and type device in the search field, then select Device Manager.
    2. Expand the Batteries category.
    3. Under the Batteries category, right-click the Microsoft ACPI Compliant Control Method Battery listing, and select Uninstall. WARNING: Do not remove the Microsoft AC Adapter driver or any other ACPI compliant driver.
    4. On the Device Manager taskbar, click Scan for hardware changes. Alternately, select Action > Scan for hardware changes.
    Windows will scan your computer for hardware that doesn't have drivers installed, and will install the drivers needed to manage your battery's power. The notebook should now indicate that the battery is charging. And it did work.

    Read the article

  • How to color highlight text in a black/white document easily

    - by Lateron
    I am editing a 96-chapter book. The text is in normal black letters on a white background. What I want is for any changes or additions to be shown automatically in another (pre-selected) color, without having to a) highlight a word or phrase to be changed or added and then b) go to the toolbar and click on the font color. In other words, I want the original color of the text to remain as it is, and any additions or changes to be visible in another color, without having to use the toolbar. Can this be done? I use OpenOffice or Word 2007 in Windows 7.

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that SELECT id, name, address FROM persons JOIN addresses ON persons.id = addresses.person_id; would be better written as SELECT persons.id, persons.name, addresses.address FROM persons JOIN addresses ON persons.id = addresses.person_id; while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of both results: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather about what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
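
    One way to rewrite the whole history in a single pass, sketched here purely as an illustration (format_sql.sh stands in for whatever formatter you would supply), is Git's filter-branch with a tree filter:

        # run the formatter against every file in every commit, on all branches
        git filter-branch --tree-filter './format_sql.sh' -- --all

    Every collaborator then has to re-clone or hard-reset afterwards, since all commit hashes change - which is exactly the merge pain the question is weighing.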

    Read the article

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:

        Main Solution
          Applications
            App 1
            App 2
            App 3
          Externals
          Libraries
            Lib 1
            Lib 2
          Tools

    The "Applications" path contains all main applications. Those do not depend on each other, but they depend on the Libraries and Externals projects. The "Externals" path contains some external DLLs referenced in our Applications and Libraries. The "Libraries" path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other and they are referenced in the Libraries and the Tools projects. The "Tools" path contains some helper programs like setup helpers, update web services, etc. Now, there are some major points why I'd like to change this structure:
    - We can't use server builds.
    - It's uncomfortable to manage TFS scrum management with sprints, impediments, etc. with a solution structure like that.
    - Every developer always has access to all projects in the solution.
    - A complete build lasts too long if one accidentally hits [F6] in Visual Studio...
    What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each Application, Library and Tool. But how can I ensure that e.g. App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the Lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • "Turn Off The Lights" on any website?

    - by gojira
    On Youtube, there is this nice button (easy to overlook - top left of the video) which lets one "turn off the lights": the site background changes from white to black, and the text color changes from black to grey. There is an unrelated plug-in for Firefox called "Turned Off The Lights", which has very similar functionality. This makes websites so much easier to read. However, both technologies only work on YouTube. Is there anything to achieve the same effect for all websites? Preferably with Firefox? I.e.: I want a very dark background and a light text color on all websites viewed with Firefox; how can I do that?
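
    One crude approach, offered only as a sketch: Firefox reads a userContent.css file from the chrome folder of your profile, and a blanket rule there can force dark colours on every page (sites that paint their own backgrounds in unusual ways may still slip through):

        /* userContent.css - force a dark theme on all web content */
        * {
            background-color: #1b1b1b !important;
            color: #c8c8c8 !important;
        }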

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM
    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight and JavaScript). So a developer would need to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.
    Changes with SharePoint 2013
    - REST capabilities - direct access to client.svc
    - New APIs - app model
    REST Capabilities
    One of the most important changes to the CSOM with SharePoint 2013 is that the web service entry point of client.svc has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it is going to make the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data access API for HTTP-based clients) is supported similarly to 2010 but will be a more important aspect of SharePoint 2013 development.
    New APIs
    CSOM for SharePoint 2013 has been buffed up with several new APIs, not only for SharePoint server functionality but also an API for Windows Phone applications. In a SharePoint 2010 farm most of the new APIs mentioned below are available only via server-side APIs:
    - Search
    - Taxonomy
    - Publishing
    - Workflow
    - User Profiles
    - E-Discovery
    - Analytics
    - Business Data
    - IRM
    - Feeds
    SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
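
    As an illustration of that alias (server and site names are placeholders), the same REST resource can be addressed either way:

        http://yourserver/sites/yoursite/_api/web/title
        http://yourserver/sites/yoursite/_vti_bin/client.svc/web/title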

    Read the article

  • Single-developer GIT workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I work on a live server generally. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get their own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
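
    A minimal sketch of how that could look (server paths and host names are placeholders): one bare repository per client site on the company server, with deployment done by pulling on the client's VPS instead of re-uploading over FTP:

        # one-time setup on the company server
        git init --bare /var/git/client1-site.git

        # day-to-day work from the office or from home
        git clone ssh://user@company-server/var/git/client1-site.git
        git commit -am "Theme tweaks"
        git push origin master

        # deploying to the client's VPS
        ssh user@client1-vps "cd /var/www/client1 && git pull origin master"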

    Read the article

  • Trac/SVN to DVCS Migration

    - by quanticle
    The project I'm currently working on is using Trac, with SVN integration. It's worked great until now. Now, however, we've taken on some additional developers and we're running into issues with branching and merging. Because of this, I think a move to a distributed version control system is in order. The problem is that Trac is very closely integrated with the SVN repository. We have tight integration between the tickets and the revision numbers of code changes corresponding to those tickets. In addition we have a support wiki that has a lot of data that helps the tech. support team. Is there a way we can migrate to git or mercurial without losing the benefits of Trac? I've looked at the git plugin for Trac, and I'm unsure of how well it works. Has anyone here used it with a project that's been migrated from SVN? EDIT: I should note that the most important priority for us is maintaining the links between Trac tickets and the corresponding changesets in SVN. That's a tool that we use every day, and it provides an easy way to jump to code changes when reviewing tickets. Wiki migration would be nice to have, but if it's not possible, we can continue to run the old system whilst we write some kind of a one-off script to migrate the content.
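
    For the repository itself (as opposed to the tickets and wiki), a history-preserving conversion is straightforward; this is only a sketch and the URL and layout flag are placeholders for your own setup:

        # convert the SVN history, mapping trunk/branches/tags onto Git branches and tags
        git svn clone --stdlayout http://svn.example.com/project project-git

    Some teams also keep the old SVN repository around read-only so that existing revision links in Trac tickets stay valid.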

    Read the article

  • Why have we got so many Linux distributions? [closed]

    - by nebukadnezzar
    Pointed to from an answer to another question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers and buttons. It would still seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. In my mind there are a number of problems with having so many distributions - perhaps less security/stability due to a smaller group of developers, and also the confusingly vast range of choice for newcomers to Linux. Some reasons that developers might decide to fork are:
    - Specializing in a particular topic (a work-related topic - i.e. for a hospital, etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such
    So what other reasons are there that have caused so many people to decide to create their own distributions? What are the thought processes that have led to this? And are these "valid" reasons - do we need so many distributions? If you can back your experiences up with references, that would be great.

    Read the article

  • find kernel config option in menuconfig

    - by puchu
    I've upgraded my gentoo-sources today to 3.3.8, and now I am looking at a diff between the old kernel's defconfig and the old kernel's .config: there are about 20 changes. I want to apply these changes manually in the new kernel's menuconfig. Where can I find a tool like:

        menuconfig-find -v 3.3.8-gentoo CONFIG_KVM_AMD
        >> Virtualization
        >>   -> Kernel-based Virtual Machine (KVM) support
        >>   -> KVM for AMD processors support
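
    Two things that already exist come close, sketched here (the path assumes the 3.3.8 sources live under /usr/src/linux): inside make menuconfig you can press "/" and type the symbol name (e.g. KVM_AMD) to see its location in the menu tree, or you can grep the Kconfig files directly:

        cd /usr/src/linux
        grep -rn --include=Kconfig\* "config KVM_AMD" .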

    Read the article

  • Needing to concatenate cells that change daily - I want to automate this rather than doing it manually

    - by Harold
    I use CONCATENATE to pull data together from different cells in my spreadsheet. Since my data changes daily, I want the formula to also change daily without having to manually input the new cell in the CONCATENATE formula. I am looking for a way to do this but am not sure how. Can anyone out there help me out please!? I appreciate the assistance in advance! Maybe this will help to explain what I need. I have a row of data from D4:AH4 that I insert daily based on the new day. I use the following formula: =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ",TEXT('Raw Data'!E4,"0.0%")) ... E4 being the cell that changes daily, where the next day would be F4, then G4, etc. All other parts of the formula will stay the same. I hope this helps! Thanks! :)
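
    One sketch of how the moving reference could be computed rather than edited by hand, assuming the columns of D4:AH4 line up with the days of the month (so day 1 sits in column D):

        =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ",TEXT(INDEX('Raw Data'!$D$4:$AH$4,1,DAY(TODAY())),"0.0%"))

    INDEX picks the nth column out of the fixed row and DAY(TODAY()) supplies n, so the formula follows the calendar without being rewritten each day.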

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56kg). The next day Mom sits down to generate the menu and looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad. Though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?
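
    If it helps to make the discussion concrete, one common shape for this - offered as an assumption, not something from the question - is to stop storing "the weight" as a single field and instead keep every observation with who reported it, when, and how much that source is trusted, resolving a current value on demand:

        import java.time.Instant;
        import java.util.Comparator;
        import java.util.List;

        // One reported measurement: the value, who reported it, when, and a trust weighting.
        record Observation(double weightLbs, String source, Instant reportedAt, double trust) {}

        class WeightResolver {
            // Resolve a "current" value: prefer trusted sources, break ties by recency.
            static Observation resolve(List<Observation> history) {
                return history.stream()
                        .max(Comparator.comparingDouble(Observation::trust)
                                       .thenComparing(Observation::reportedAt))
                        .orElseThrow();
            }
        }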

    Read the article

  • Windows Task Scheduler

    - by Zulakis
    I am trying to deploy an auto-starting program with Administrator privileges on our XP-SP1 machines. For this, I am using the Windows Task Scheduler. Since most of our machines get deployed by using a PXE-imaging-system, the task fails because the Administrator user entered is, for example, r126/Administrator. If I only enter Administrator then it automatically changes to machinename/Administrator. Since the machine names are automatically changed by the imaging-system, the tasks fail to run. Any ideas on how to fix that?
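
    One possible workaround, sketched with placeholder task details: when the task is created from the command line, the run-as user can be written as ".\Administrator", which names the local Administrator account without baking in the machine name:

        schtasks /Create /TN "AutoStartApp" /TR "C:\Deploy\app.exe" /SC ONSTART /RU ".\Administrator" /RP SecretPassword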

    Read the article

  • Applying Service Pack 1 to Team Foundation Server 2010

    - by Enrique Lima
    Disclosure: I performed the following activities on my Windows 7 SP1 system, with Visual Studio 2010 SP1 and a local Basic installation of TFS 2010. As with any deployment of a service pack into a server environment, take your recommended precautions and be aware of the changes you are putting in. With that said, make sure you back up your databases and that you have an exit/rollback strategy in the event of an unexpected situation. Team Foundation Server 2010 Service Pack 1 corresponds to KB2182621. The KB article is http://support.microsoft.com/kb/2182621 The process is very simple to follow: you will need to execute the mu_team_foundation_server_2010_sp1_x86_x64_651711.exe file. That will extract the needed files and launch the wizard-driven installation. Once this process completes, you need to validate the changes. In the Team Foundation Server 2010 Administration Console you should see the reference to the KB number and SP1. It is also a good idea to validate the log locations and records, either from the Team Foundation Server 2010 Administration Console or from Windows Explorer: go to the C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs location and review the logs referenced by the servicing references.

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in MyOracleSupport you'll find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. One customer called it the "Time Zone Spaghetti": after reading MOS notes about DST for several hours he ended up back at the note where he had begun, still not clear about what to do. I usually use the scripts from MOS Note:977512.1, as you'll just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance after applying the DST V18 patch to your database's homes. As a reminder to myself when traveling, I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have been changed in the meantime and may contain changes or corrections, and it has a lot more explanatory information than I could cover here. And credit to Gunter Vermeir from Oracle Support, who is the owner of that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql

    Read the article

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:
    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)
    but:
    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed
    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
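
    A minimal sketch of the single-writer pattern (paths and field names are placeholders, Lucene 3.x-style API as in the question): the application keeps one IndexWriter open for its whole lifetime, calls commit() whenever changes should become durable and visible to newly opened readers, and only calls close() at shutdown:

        Directory dir = FSDirectory.open(new File("/path/to/index"));
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36,
                new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, config);   // created once, shared by all threads

        Document doc = new Document();
        doc.add(new Field("title", "Hello Lucene", Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);

        writer.commit();   // makes the change durable; readers opened after this see it
        // ... keep adding, deleting and committing for as long as the application runs ...
        writer.close();    // only at application shutdown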

    Read the article

  • Non-dynamic CMS [closed]

    - by user20457
    Some of the web sites I visit every day (news, sports, etc.), although the content changes very often (several times per day), always have URLs with an .html extension, which makes me think that the content has been generated once and then published as a static page, rather than generated on every request or even cached in memory. For example, the fictitious site "mysports.com" has a "futbol.html" page. Then yesterday Messi got injured and they had another item to put on that page, so I presume they posted the new item in their CMS system, and a publishing action was automatically triggered afterwards that recreated "futbol.html" in a CDN with the new item and probably discarded the oldest one. Then the ETag changes and clients will get the new page if they try to access it. (The site is fictitious, but this is what I believe happened yesterday on the sports site I read.) This would fit the CQRS approach, and I presume the performance is huge. I know lots of CMSs (WP, Drupal, BlogEngine.net, DNN, etc...), but I have never seen any able to do this, or at least I was not aware of this feature. What are those CMSs called? Which are the most well known? Cheers.

    Read the article

  • Control cell reference increment when dragging a formula in Libre Office Calc (3.5)

    - by Chuck
    Using Libre Office Calc (3.5) and I have a question. When copying a formula that references cells into multiple empty cells, the default is to increment each cell reference by one column or row, depending on the direction the formula is being dragged. A formula '= 1 + A1' dragged horizontally changes to '= 1 + B1' when pulled one cell to the right, and to '= 1 + A2' when pulled one cell down. Is there a way to control or increase the increment of the referenced cell? Is it possible to have a formula '= 1 + A1' that effectively changes to '= 1 + A3' when dragged down one cell, '= 1 + A5' when dragged down two cells, etc.? If it matters, I am trying to take a constantly updating master list of data that is organized by dates (Wednesdays and Saturdays) and create separate spreadsheets for each day of the week that can be updated by only pulling down the formula into the next cell. My attempts at using the 'lookup' function, 'offset' function, and creating a sort column in Libre Office Calc are thwarted by my inability to figure out how to get around the single-step increment when pulling a formula down into the next cell. Thanks
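
    A sketch of one way OFFSET can provide a two-row stride (this assumes the formula column starts in row 1; adjust the anchor cell as needed):

        =1 + OFFSET($A$1, (ROW()-1)*2, 0)

    Placed in row 1 and dragged down, ROW()-1 counts 0, 1, 2, ... so the offsets become 0, 2 and 4 rows below A1 - that is, A1, A3, A5 - giving the skip-one behaviour without editing each cell.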

    Read the article

  • Smart backup software

    - by gisek
    I use a laptop on a daily basis. As I have a lot of important data there, it would be nice to do backups of some directories every day. Can you recommend a specific application that would take care of it? Maybe there is an app that would instantly commit changes I make in a directory on my laptop to the backup folder? The important thing is that I have some big files (a few GBs) that have minor changes very often. I'm talking about VirtualBox disk images. It would be nice if the software could handle that smartly. Also note that I'd like to store the backup on an external USB HDD, which sometimes isn't plugged in.
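
    Not a full application, but as a sketch of the underlying mechanics (paths and mount point are placeholders): rsync handles the "big file, small daily changes" case well, and --inplace together with --no-whole-file updates only the changed blocks of existing multi-GB images instead of rewriting them:

        # run manually or from a daily cron job; skipped harmlessly if the disk isn't mounted
        if mountpoint -q /media/backupdisk; then
            rsync -a --inplace --no-whole-file --delete ~/Documents ~/"VirtualBox VMs" /media/backupdisk/daily/
        fi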

    Read the article

  • How do I interpolate air drag with a variable time step?

    - by Valentin Krummenacher
    So I have a little game which works in small steps; however, those steps vary in time, so for example I sometimes have 10 steps/second and then I have 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y=v0*dt+g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step and g is the gravity. To calculate the velocity at the end of a step I use v=v0+g*dt, which also gives me correct results, independent of whether I use 2 steps with a dt of, for example, 20ms or one step with a dt of 40ms. Now I would like to introduce air drag. For simplicity's sake I use a=k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1kg for my object's mass the force is the same as the resulting acceleration), k is a constant (in this case I'm using 0.001) and v is the speed. Now in an infinitely small time interval, a is k multiplied by the square of the velocity in this small time interval. The problem is that v in the next time interval would depend on the drag of the last, which again depends on the v of the last interval, and so on... In other words: if I use a=k*v^2 I get different results for my position/velocity when I use 2 steps of 20ms than when I use one step of 40ms. I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed the problem, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like "Adding air drag to a golf ball trajectory equation" or similar, for that kind of method only gives correct results when all my steps are the same. (I hope you can understand my intermediate English; it's not my main language, so I would like to say sorry for all the silly mistakes I might have made in my question.)
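
    There is a closed form for pure quadratic drag, but once gravity is mixed in the usual remedy is to make the physics step independent of the frame time. A minimal sketch of that idea (all names and constants here are assumptions for illustration): accumulate the variable frame time and consume it in fixed slices, so drag is always integrated with the same dt and 2 steps of 20ms give exactly the same result as 1 step of 40ms:

        static final double FIXED_DT = 1.0 / 120.0; // fixed physics step, in seconds
        static final double G = -9.81;              // gravity
        static final double K = 0.001;              // drag constant from the question

        double accumulator = 0.0;
        double y = 0.0;
        double v = 0.0;

        void step(double frameDt) {                 // frameDt: whatever the last frame took
            accumulator += frameDt;
            while (accumulator >= FIXED_DT) {
                double drag = -K * v * Math.abs(v); // quadratic drag, always opposing the motion
                v += (G + drag) * FIXED_DT;         // semi-implicit Euler: update velocity first...
                y += v * FIXED_DT;                  // ...then position with the new velocity
                accumulator -= FIXED_DT;
            }
        }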

    Read the article

  • Updating wordpress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control. AKA: multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it will update the files on one FE only. Then I would need to copy the new files over to the other FE nodes. Plus, whenever these changes are written as WordPress updates on a node, it writes code into the git repo. That then breaks the automated deploys that perform 'git pull', since the working copy has untracked changes and refuses to pull in new deploys unless someone intervenes manually. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
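
    One common way out, sketched as an assumption rather than as the asker's setup: turn off in-admin file changes entirely and let the deploy pipeline be the only thing that writes code, so every node stays identical to the repo. WordPress has constants for exactly this in wp-config.php:

        // wp-config.php
        define( 'DISABLE_FILE_MODS', true );           // no plugin/theme/core updates or file editors in wp-admin
        define( 'AUTOMATIC_UPDATER_DISABLED', true );  // no background core auto-updates

    Core, plugin and theme updates are then performed in the repository (locally or on a build box), committed, and rolled out through the normal git-based deploy.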

    Read the article

  • Cached css/javascript files on Sun Java System Web Server

    - by Derp
    I'm doing front-end web development in a Solaris 10 / Sun Java System Web Server 7.0U2 environment. I have noticed that changes to static css or javascript files often do not take effect immediately, whereas changes to static html files always do. My best guess is that a default setting in the web server causes it to cache certain file types in order to provide reasonable performance out of the box. I don't have the admin server running--I'll need to edit the config files by hand. What change(s) can I make so that all of my css and javascript edits take effect immediately? Thanks!

    Read the article

  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made up of two parts: a rather fat core, and thin-to-big customer projects built upon it. Customer projects usually expand the core. Overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects. The worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: rather frequently, we break each other's stuff. Someone from Team A expands or refactors core feature Y, causing unexpected errors for one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, thought of feature Y as stable and did not test it before releasing, unaware of the changes. How do we get rid of these problems? What kind of 'announcement technique' can you recommend?

    Read the article
