Search Results

Search found 1748 results on 70 pages for 'oum releases 6 1'.


  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (see the update below for progress to 9.04) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

    1. Ran apt-get update, which returns errors like: Err http://gb.archive.ubuntu.com intrepid/main Packages 404 Not Found [and the same for many other packages]
    2. Ran do-release-upgrade, which returns: Checking for a new ubuntu release / Failed Upgrade tool signature / Failed Upgrade tool / Done downloading / extracting 'jaunty.tar.gz' / Failed to extract / Extracting the upgrade failed. There may be a problem with the network or with the server.
    3. Edited /etc/update-manager/release-upgrades and changed Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns: Checking for a new ubuntu release / current dist not found in meta-release file / No new release found
    4. (Updated) Followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order.

    So basically, it seems I cannot upgrade because my current distro is out of date and no longer supported. Is there a way to upgrade directly to 10.x or 11.x? Note that as this is a server I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources, and I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server. However, now I cannot upgrade from 9.04 to 9.10: running do-release-upgrade produces the same error as #2 above, except it says "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

        A fatal error occurred
        Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
        Traceback (most recent call last):
          File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in sys.exit(main())
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main if app.run():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run return self.fullUpgrade()
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade if not self.doPostInitialUpdate():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate self.quirks.run("PostInitialUpdate")
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run for plugin in self.plugin_manager.get_plugins(condition):
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins filenames = self.get_plugin_files()
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files basenames = [x for x in os.listdir(dirname)
        OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old, unsupported release I don't know if it's worth doing. However, is there a way around this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
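    For reference, a minimal sketch of the EOL-style apt sources for jaunty, assuming the standard old-releases mirror layout (the release name changes for each hop in the chain):

        # /etc/apt/sources.list -- EOL release served from old-releases (sketch)
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse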

    Read the article

  • Top Three Reasons to Move to the Cloud Before Your Next Upgrade

    - by yaldahhakim
    1) Reduced Cost - During major upgrades, most organizations typically need to replace or invest in extra hardware and other IT resources to support the upgrade. With the Cloud, this can become more of an op-ex discussion. The flexibility and scalability of the cloud also allow new business solutions to be set up more quickly, with the ability to scale IT resources to closely map to changing business requirements. This enables more and faster innovation, because you are spending money on core business initiatives instead of on setting up complex environments.

    2) Reduced Risk - This is especially true when you are working with a cloud provider that possesses substantial in-house expertise. Oracle Managed Cloud Services has been hosting and managing customers' business applications for over a decade and has helped hundreds of customers upgrade and adopt new technologies faster and better. Customers have access to over 15,000 Oracle experts in operation centers around the world who can work around the clock and who have direct access to Oracle Development to optimize our customers' upgrade experience.

    3) Reduced Downtime - Whether a customer is looking to upgrade their E-Business Suite, PeopleSoft, JD Edwards, or Fusion applications, we've developed standardized best practices and tools across the technology stack to accelerate the upgrade and migration with substantially reduced timelines and risk. And because the process is repeatable, customers stay more current on the latest releases, continuously taking advantage of the newest innovations - without the headache.

    By leveraging the economies and expertise of scale that belong to Oracle, you can sleep better at night knowing that your next major application upgrade is taken care of. Check out the video of this Managed Cloud Services customer to learn more about their experience.

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly. System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10).

    An icon on monitor 2 is present but elongated, almost an artifact; other icons from the primary screen are absent, but this one icon is there on the second monitor. The icons on both screens can be selected. Painting is weird on monitor 2. The Launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking on the text editor in the screen 2 launcher will launch gedit there, but if I drag it, it leaves a trail of after-images as if repainting is failing. If I move the cursor along the launcher, the help tags like "LibreOffice Writer" appear but stay on screen unless I drag the active gedit window over them; then part of the help bubbles are overwritten, leaving after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe it is a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, "Apply" and confirm, both monitors work normally (although the right monitor lets the cursor slide off the right edge onto the left edge of monitor 1 - which sounds correct). Painting of windows does not leave an after-image. However, success is only temporary. The setting survives a reboot, but painting on the left monitor (now monitor 2) then replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner - which probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" step in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. The description bubbles that pop up are also overwritten correctly on both screens, so none of those artifacts either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this came after positing that the motherboard VGA and HDMI ports could be the issue, so I installed an e-GeForce 7600 GT Dual DVI card (I know the web thinks it is VGA, not DVI, but the connectors are DVI). No change to the weird behavior: the good parts continue to work, the weirdness persists, and swapping monitor positions seems to cure the issue. So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings - Displays makes it work, just not across a boot, I suspect so.
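    As a diagnostic step, the arrangement can also be driven from a terminal with xrandr, which takes the Displays applet out of the equation; a minimal sketch, with hypothetical output names (run a bare xrandr first to see the real ones):

        # list connected outputs and their current modes
        xrandr
        # place the second monitor to the right of the first (names are examples)
        xrandr --output HDMI-0 --auto --right-of DVI-0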

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when it starts, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world - yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well, search no more - this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WiX templates and our custom TFS check-in policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service: Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward - just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: if you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer. Then open web.config in a text editor, locate the <applicationSettings> element, and edit the following setting values:

    FeedTitle - This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    BaseURI - When Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
    VSIXAbsolutePath - This is the path where you will deploy your extensions. This can be a local folder or a remote share; you just need to make sure that the application pool identity account has read permissions in this folder.

    Save web.config to finish the installation, then open a browser and enter the URL to the service. It should show an empty feed page.

    Adding the Private Gallery in Visual Studio 2012: Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: go to Tools -> Options and select Environment -> Extensions and Updates, press Add to add a new gallery, enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings.

    Deploying an Extension: This one is easy - just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!
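    For comparison, the static feed that the MSDN page describes is just an Atom document with one entry per VSIX; a trimmed sketch (the ids, names and URLs are placeholders, and the Vsix element should be checked against the MSDN sample):

        <?xml version="1.0" encoding="utf-8"?>
        <feed xmlns="http://www.w3.org/2005/Atom">
          <title type="text">My Private Gallery</title>
          <id>http://server/gallery/</id>
          <entry>
            <id>MyCompany.MyExtension</id>
            <title type="text">My Extension</title>
            <summary type="text">Internal check-in policies.</summary>
            <!-- src points at the downloadable VSIX package -->
            <content type="application/octet-stream"
                     src="http://server/gallery/extension/MyExtension.vsix" />
            <Vsix xmlns="http://schemas.microsoft.com/developer/vsx-syndication-schema/2010">
              <Id>MyCompany.MyExtension</Id>
              <Version>1.0</Version>
            </Vsix>
          </entry>
        </feed>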

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability: No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly or yearly.

    3. Always Test - If you take away one thing from this post, it is to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code - both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and possible future direction for managing query performance.
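    For point 4, a minimal sketch of what a "way out" can look like with Oracle's DBMS_STATS package (the schema and staging table names here are placeholders):

        -- One-time: create a statistics staging table
        EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');

        -- Before re-gathering: save the current statistics
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');

        -- If performance regresses: restore the saved set
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'P6ADMIN', stattab => 'STATS_BACKUP');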

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs are different from one command to the next, obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest.

    In our hardware library we would have a class for each command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the packet string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we create the args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN.

    Some of our other command specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears the text of all modules behind a specific controller device. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX}, where OC is literal text which tells the controller to only clear text on a specific module type and leave the others alone; or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off of a specific device. Currently, in the method which generates the packet string, we evaluate fields on the args class to determine which spec to use when formatting the packet. For this example, it is along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX}; else if m_ClearOCs == true then use {SOH}AAOC{ETX}; else use {SOH}AA{ETX}.

    I had considered using XML, or a database, to store String.format format strings linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic way in which we select which version of the command to use. I then considered using a rule engine to build out expressions which could be interpreted at runtime to evaluate the args class's state and, from that, select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it also seems overly complex.

    So this is why I am here. I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish: logic to determine which set of commands to use for a given firmware version; of those commands, which version of each command to use (based on the args class's state); keeping the rules logic decoupled from the application so as to avoid needing releases for every firmware version; and being simple enough that I don't need weeks of study and trial and error to implement effectively.
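    One lightweight direction, short of a full rule engine, is to model each packet spec as a guard predicate plus a format string and keep an ordered, first-match-wins list of them per firmware version, loadable from XML at startup. A rough Java sketch - every name here is made up for illustration:

        import java.util.List;
        import java.util.function.Predicate;

        // One packet spec: "use this format when the guard matches the args state".
        final class PacketSpec<A> {
            private final Predicate<A> guard;
            private final String format;

            PacketSpec(Predicate<A> guard, String format) {
                this.guard = guard;
                this.format = format;
            }

            boolean matches(A args) { return guard.test(args); }
            String formatString()   { return format; }
        }

        // Ordered list per (firmware version, command); first match wins,
        // mirroring the if/else-if chain described above.
        final class CommandFormatter<A> {
            private final List<PacketSpec<A>> specs;

            CommandFormatter(List<PacketSpec<A>> specs) { this.specs = specs; }

            String select(A args) {
                for (PacketSpec<A> spec : specs) {
                    if (spec.matches(args)) return spec.formatString();
                }
                throw new IllegalStateException("no packet spec matches the args state");
            }
        }

    The guards would come from a small, fixed vocabulary (device ID present, clear-OCs flag set, and so on), which keeps the rules declarative and externally loadable without a general-purpose engine.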

    Read the article

  • Why does this Maven command produce a LifecycleExecutionException?

    - by ovr
    COMMAND: mvn org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4:generate \ -DarchetypeGroupId=org.beardedgeeks \ -DarchetypeArtifactId=gae-eclipse-maven-archetype \ -DarchetypeVersion=1.1.2 \ -DarchetypeRepository=http://beardedgeeks.googlecode.com/svn/repository/releases ERROR: + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Internal error in the plugin manager getting plugin 'org.apache.maven.plugins:maven-archetype-plugin': Plugin 'org.apache.maven .plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor: 1) Plugin's descriptor contains the wrong group ID: net.kindleit 2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin 3) Plugin's descriptor contains the wrong version: 0.5.9 [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Internal error in the plugin manager getting plugin 'org.apache.maven.plugins: maven-archetype-plugin': Plugin 'org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor: 1) Plugin's descriptor contains the wrong group ID: net.kindleit 2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin 3) Plugin's descriptor contains the wrong version: 0.5.9 at org.apache.maven.lifecycle.DefaultLifecycleExecutor.verifyPlugin(DefaultLifecycleExecutor.java:1544) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.getMojoDescriptor(DefaultLifecycleExecutor.java:1851) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.segmentTaskListByAggregationNeeds(DefaultLifecycleExecutor.java:462) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:175) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.PluginManagerException: Plugin 'org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor: 1) Plugin's descriptor contains the wrong group ID: net.kindleit 2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin 3) Plugin's descriptor contains the wrong version: 0.5.9 at org.apache.maven.plugin.DefaultPluginManager.addPlugin(DefaultPluginManager.java:330) at org.apache.maven.plugin.DefaultPluginManager.verifyVersionedPlugin(DefaultPluginManager.java:224) at org.apache.maven.plugin.DefaultPluginManager.verifyPlugin(DefaultPluginManager.java:184) at org.apache.maven.plugin.DefaultPluginManager.loadPluginDescriptor(DefaultPluginManager.java:1642) at 
org.apache.maven.lifecycle.DefaultLifecycleExecutor.verifyPlugin(DefaultLifecycleExecutor.java:1540) ... 15 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Wed Jun 09 19:45:30 CEST 2010 [INFO] Final Memory: 3M/15M [INFO] ------------------------------------------------------------------------

    Read the article

  • Trouble with ITextSharp - Converting XML to PDF

    - by AllenG
    Okay... I'm trying to use the most recent version of iTextSharp to turn an XML file into a PDF. It isn't working. The documentation on SourceForge doesn't seem to have kept up with the actual releases; the code in the provided example won't even compile under the newest version. Here is my test XML:

        <Remittance>
          <RemitHeader>
            <Payer>BlueCross</Payer>
            <Provider>Maricopa</Provider>
            <CheckDate>20100329</CheckDate>
            <CheckNumber>123456789</CheckNumber>
          </RemitHeader>
          <RemitDetail>
            <NPI>NPI_GOES_HERE</NPI>
            <Patient>Patient Name</Patient>
            <PCN>0034567</PCN>
            <DateOfService>20100315</DateOfService>
            <TotalCharge>125.57</TotalCharge>
            <TotalPaid>55.75</TotalPaid>
            <PatientShare>35</PatientShare>
          </RemitDetail>
        </Remittance>

    And here is the code I'm attempting to use to turn that into a PDF:

        Document doc = new Document(PageSize.LETTER, 36, 36, 36, 36);
        iTextSharp.text.pdf.PdfWriter.GetInstance(doc, new StreamWriter(fileOutputPath).BaseStream);
        doc.Open();
        SimpleXMLParser.Parse((ISimpleXMLDocHandler)doc, new StreamReader(fileInputPath).BaseStream);
        doc.Close();

    Now, I was pretty sure the (ISimpleXMLDocHandler)doc piece wasn't going to work, but I can't actually find anything in the source that both a) implements ISimpleXMLDocHandler and b) will accept a standard XML document and parse it to PDF. FYI, I did try an older version which would compile using the example code from SourceForge, but it wasn't working either.

    Read the article

  • Rolemanager not working when ASPDotNetStorefront served in a virtual folder with FormsAuthentication

    - by digiguru
    Does anyone know if there is a valid roleManager I can apply to ASPDotNetStorefront to get the site working? We have a website that has an ASP.NET storefront served in a virtual folder off the root:

    ourwebsite.com
    ourwebsite.com/shop

    Everything had been working fine until we recently put the groundwork in place for forms authentication in the website. This caused an error on the production server when you tried to get into the shop:

        Unable to cast object of type 'System.Web.Security.RolePrincipal' to type 'AspDotNetStorefrontCore.AspDotNetStorefrontPrincipal'.

    Looking at the shop's web.config I noticed there was no roleManager node, so I tried to fix the problem by disabling it in the shop's web.config:

        <roleManager enabled="false">
        </roleManager>

    This prevented the error occurring, but also prevented the shopping basket from working. Instead I removed the roleManager tag from the root website:

        <!-- Conflicting with AspDotNetStorefront
        <roleManager enabled="true" defaultProvider="CustomizedRoleProvider">
          <providers>
            <add name="CustomizedRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="constr" />
          </providers>
        </roleManager>
        -->

    This worked, but obviously prevents role authentication from working on the root website, which is okay for the next 2 or 3 releases. Does anyone know the correct code for the roleManager in DotNetStorefront?
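    One avenue that might be worth investigating (a sketch only, not verified against ASPDotNetStorefront): keep the root roleManager, but wrap it in a location element with inheritInChildApplications="false" so the shop virtual application stops inheriting it:

        <!-- root web.config: roles stay on for the root site, but the shop
             child application no longer inherits this section (sketch) -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <roleManager enabled="true" defaultProvider="CustomizedRoleProvider">
              <providers>
                <add name="CustomizedRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="constr" />
              </providers>
            </roleManager>
          </system.web>
        </location>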

    Read the article

  • How to HitTest in a WPF Grid Panel to find column index.

    - by John
    Hi. We need to create a "timeline" feature where a user is allowed to drag and draw "time spans" onto a WPF Grid (we thought this better than a Canvas since it has resizing capabilities). Each span spans multiple columns, but only one row. We utilize the PreviewMouseDown/Up/Move events to see when the user clicks, drags, and releases the mouse. On the "Down" we create a new rectangle, and on the "Move" we resize the rectangle, setting its ColumnSpan property to the correct value every time the mouse enters the next column. At this point the grid has 48 1-star-sized columns. How do we find out which column the user has clicked on? Right now we are unable to find the column using the VisualTreeHelper.HitTest function. We would like to position the rectangle correctly via Grid.SetColumn(rect, X) and then resize it as needed using Grid.SetColumnSpan(rect, Y). Has anyone done something like this before who can help us? Or is there a better way to draw what we need? Thanks.
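    For star-sized columns, one approach that sidesteps VisualTreeHelper.HitTest entirely is to walk the grid's ColumnDefinitions and compare accumulated ActualWidth values against the mouse position; a rough C# sketch (assumes System.Windows.Controls and System.Windows.Input, and is not tied to the exact control names in the question):

        // Returns the grid column under the given mouse event, or -1 if none.
        private int GetColumnIndex(Grid grid, MouseEventArgs e)
        {
            double x = e.GetPosition(grid).X;   // position relative to the grid
            double right = 0;
            for (int i = 0; i < grid.ColumnDefinitions.Count; i++)
            {
                right += grid.ColumnDefinitions[i].ActualWidth;
                if (x < right)
                    return i;
            }
            return -1;
        }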

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

    /application
    /modules
        /cms
        /filemanager
        /block
        /pages
        /sitemap
        /youtube
        /rss
    /skin
        /backend
            /default
                /css
                /js
                /images
        /frontend
            /default
                /css
                /js
                /images

    Application contains code specific to the current CMS implementation, i.e. code for this specific CMS. Modules contain reusable portions of code that we share across projects, such as libraries to work with YouTube or RSS feeds. We include these as git submodules, so that we can update the module in any website and push the changes back across all other projects. It makes it really easy to apply a change to our code and distribute it. We wanted to turn the CMS into a module so we get the same benefit - we can run the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem however: the CMS requires javascript/images/css in order for it to work correctly. Things we have thought about:

    - We could create 2 submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms, i.e. version 1.2.2 CMS might have issues with 1.0.3 CMS-Skin.
    - We could add the skin to the cms module, but this has the following problems: skin should be available on the document root while module code shouldn't be (and if it is, it should probably be secured via .htaccess), and it doesn't seem to make any sense to bundle assets with PHP code.
    - We could create a symlink from /skin/backend/ to /modules/cms/skin, but does this cause any security problems, and do we want to require something like a symlink for the application to work?
    - We could create a hook for git, or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs (a sketch of such a hook follows below), but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How is it possible to get the source code for a CMS under version control, work on the application for a client, then update the source code as releases are given by the vendor? How do applications like Magento or Drupal do this?
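    For the copy-on-update idea in the last bullet, a git post-merge hook is one place such a sync could live; a minimal sketch, assuming this layout and rsync being available (the hook file must be made executable):

        #!/bin/sh
        # .git/hooks/post-merge -- sync the CMS skin out to the document root
        # (one-way copy; edits made directly in skin/backend/ get overwritten)
        rsync -a --delete modules/cms/skin/backend/ skin/backend/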

    Read the article

  • Eclipse: Cannot install CDT because of a conflicting dependency

    - by MarkZar
    My Eclipse has been configured for Java and PyDev; now I want to configure the C/C++ development tools as well. I don't want to download the whole Eclipse IDE for C/C++ Developers, as that is not convenient, so I decided to install CDT in my existing Eclipse. I went to Help -> Install New Software, entered http://download.eclipse.org/tools/cdt/releases/indigo, waited for a while, chose CDT Main Features and CDT Optional Features, pressed Next, and then an error occurred:

        Cannot complete the install because of a conflicting dependency.
        Software being installed: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058)
        Software being installed: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925)
        Only one of the following can be installed at once:
          GDB DSF Debugger Integration Core 4.0.0.201106081058 (org.eclipse.cdt.dsf.gdb 4.0.0.201106081058)
          GDB DSF Debugger Integration Core 4.0.2.201202111925 (org.eclipse.cdt.dsf.gdb 4.0.2.201202111925)
          GDB DSF Debugger Integration Core 4.0.1.201109151620 (org.eclipse.cdt.dsf.gdb 4.0.1.201109151620)
        Cannot satisfy dependency:
          From: C/C++ Development Tools 8.0.2.201202111925 (org.eclipse.cdt.feature.group 8.0.2.201202111925)
          To: org.eclipse.cdt.gnu.dsf.feature.group [4.0.1.201202111925]
        Cannot satisfy dependency:
          From: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058)
          To: org.eclipse.cdt.dsf.gdb [4.0.0.201106081058]
        Cannot satisfy dependency:
          From: C/C++ DSF GDB Debugger Integration 4.0.1.201202111925 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.1.201202111925)
          To: org.eclipse.cdt.dsf.gdb [4.0.2.201202111925]
        Cannot satisfy dependency:
          From: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925)
          To: org.eclipse.cdt.feature.group [8.0.2.201202111925]

    I have googled for a long time but still cannot find a valid solution. Can anyone give me a hand? Thanks a lot!

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

    |-trunk
    |-branches
    | |-releases
    | | |-releaseA
    | | `-releaseB
    | `-features
    | |-featureX
    | `-featureY
    |-tags
    |-releaseA
    | |-beta
    | `-RTP
    `-releaseB
     |-beta
     `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration, as it won't be feasible to close all of them at once in the near future.)

    For several reasons, but primarily because merges have been becoming an increasing pain, we are considering a switch to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g. yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory each. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case; either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well. So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure that would be the right approach in the first place - it depends on how the tools handle historic merges and whether they need to be aware of all the other branches.
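    For what it's worth, hg convert can be told where a repository's trunk, branches, and tags live via its convert.svn.* settings; since each setting takes a single directory, a nested layout like the one above may need one pass per branch container. A hedged sketch:

        # one conversion pass per branch container (sketch; the convert.svn.*
        # options each take a single directory)
        hg convert \
          --config convert.svn.trunk=trunk \
          --config convert.svn.branches=branches/releases \
          --config convert.svn.tags=tags \
          file:///path/to/svn-repo hg-repo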

    Read the article

  • Maven deploy:deploy-file not found due to version/timestamp appended to jar

    - by JamesC
    I'm having a problem using deploy:deploy-file with snapshots that I'd like some advice on, please. I have two projects: 1) an Ant-based project and 2) a Maven-based project that consumes the jars of the other project via Archiva. I've added a target to the Ant project to deploy snapshots on every successful build during our iteration. The problem is that the Maven project cannot find them, because the name of the dependency has a timestamp appended, like so: someJar-1.0-20100407.171211-1.jar. Here is the Ant target:

        <exec executable="${maven.bin}" dir="../lib">
          <arg value="deploy:deploy-file" />
          <arg value="-DgroupId=com.my.package" />
          <arg value="-DartifactId=${ant.project.name}" />
          <arg value="-Dversion=${manifest.implementation.version}-SNAPSHOT" />
          <arg value="-Dpackaging=jar" />
          <arg value="-Dfile=../lib/${ant.project.name}-${manifest.implementation.version}-SNAPSHOT.jar" />
          <arg value="-Durl=http://archiva.xxx.com/archiva/repository/snapshots" />
          <arg value="-DrepositoryId=snapshots" />
        </exec>

    I have a similar Ant target for releases and this works fine. Other pure Maven projects which deploy snapshots via mvn deploy work fine. Does anyone know where I am going wrong? Thank you.

    Update: Figured out the answer, see below.
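    One parameter worth knowing about in this area: the Maven 2 deploy plugin's deploy-file goal has a uniqueVersion flag that controls whether deployed snapshots get timestamped file names; a sketch of passing it from the Ant target above:

        <!-- sketch: ask deploy:deploy-file for non-timestamped snapshots -->
        <arg value="-DuniqueVersion=false" />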

    Read the article

  • easy_install ReviewBoard [Errno 104] Connection reset by peer

    - by blastthisinferno
    I have a Kubuntu 10.04 VM image and am trying to install Review Board by following the Linux installation wiki. When I get to the step to easy_install ReviewBoard, I encounter a problem I cannot find a solution to. Below is the console output:

        $ sudo easy_install ReviewBoard
        Searching for ReviewBoard
        Best match: ReviewBoard 1.0.8
        Processing ReviewBoard-1.0.8-py2.6.egg
        ReviewBoard 1.0.8 is already the active version in easy-install.pth
        Installing rb-site script to /usr/local/bin
        Using /usr/local/lib/python2.6/dist-packages/ReviewBoard-1.0.8-py2.6.egg
        Processing dependencies for ReviewBoard
        Searching for pytz
        Reading http://downloads.reviewboard.org/mirror/
        Download error: [Errno 104] Connection reset by peer -- Some packages may not be found!
        Reading http://downloads.reviewboard.org/releases/ReviewBoard/1.0/
        Download error: [Errno 104] Connection reset by peer -- Some packages may not be found!
        Reading http://pypi.python.org/simple/pytz/
        Download error: [Errno 104] Connection reset by peer -- Some packages may not be found!
        Reading http://pypi.python.org/simple/pytz/
        Download error: [Errno 104] Connection reset by peer -- Some packages may not be found!
        Couldn't find index page for 'pytz' (maybe misspelled?)
        Scanning index of all packages (this may take a while)
        Reading http://pypi.python.org/simple/
        Download error: [Errno 104] Connection reset by peer -- Some packages may not be found!
        No local packages or download links found for pytz
        error: Could not find suitable distribution for Requirement.parse('pytz')

    I am new to Python, but it seems like easy_install cannot resolve a version of pytz because every index it tries is unreachable. I have read:

    http://stackoverflow.com/questions/383738/104-connection-reset-by-peer-socket-error-or-when-does-closing-a-socket-resul
    http://homepage.mac.com/s_lott/iblog/architecture/C551260341/E20081031204203/index.html

    and it seems like the problem described in those articles has more to do with development than with my problem, but I could be wrong. Has anyone encountered a problem like this? If there is any missing information that would help troubleshoot this, please let me know.

    Read the article

  • Save UIWebView contents to photo gallery

    - by user307410
    There's a video tutorial on YouTube that shows how to perform this. It consists of a UIWebView and a toolbar button to save the contents. I haven't had any luck making this work. Could someone have a look and see if they can make it work? Many thanks in advance. http://www.youtube.com/watch?v=gDPca3JIc_s&feature=player_embedded#

        //
        // SaveWebViewController.h
        // SaveWeb
        //
        // Copyright MyCompanyName 2010. All rights reserved.
        //

        #import <UIKit/UIKit.h>

        @interface SaveWebViewController : UIViewController {
            IBOutlet UIWebView *webView;
        }

        @property (nonatomic, retain) IBOutlet UIWebView *webView;

        - (IBAction)saveWeb:(id)sender;

        @end

        //
        // SaveWebViewController.m
        // SaveWeb
        //
        // Copyright MyCompanyName 2010. All rights reserved.
        //

        #import "SaveWebViewController.h"
        #import <QuartzCore/QuartzCore.h>   // needed for -renderInContext:

        @implementation SaveWebViewController

        @synthesize webView;

        - (IBAction)saveWeb:(id)sender {
            // Render the web view's layer (not self.view's) so the snapshot
            // matches the context size created from the web view's frame.
            UIGraphicsBeginImageContext(webView.frame.size);
            [webView.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
        }

        // The designated initializer. Override to perform setup that is required before the view is loaded.
        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }

        // NOTE: the original post also overrides loadView with an empty body;
        // when the view comes from a nib, that override prevents the nib from
        // loading and should be removed.

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
            [webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://google.com"]]];
        }

        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
        }

        - (void)dealloc {
            [webView release];   // the retained outlet must be released here
            [super dealloc];
        }

        @end

    Read the article

  • array retain question

    - by Cosizzle
    Hello, I'm fairly new to Objective-C; most of it is clear, but when it comes to memory management I fall a little short. Currently what my application does is: during an NSURLConnection, when the -(void)connectionDidFinishLoading:(NSURLConnection *)connection method is called, I enter a method to parse some data, put it into an array, and return that array. However, I'm not sure this is the best way to do it, since I don't release the array from memory within the custom method (method1, see the attached code). Below is a small example to better show what I'm doing.

    .h file:

        #import <UIKit/UIKit.h>

        @interface memoryRetainTestViewController : UIViewController {
            NSArray *mainArray;
        }

        @property (nonatomic, retain) NSArray *mainArray;

        @end

    .m file:

        #import "memoryRetainTestViewController.h"

        @implementation memoryRetainTestViewController

        @synthesize mainArray;

        // this would be the parsing method
        - (NSArray*)method1 {
            // ???: by not releasing this, is that bad? Or does it get released with mainArray?
            NSArray *newArray = [[NSArray alloc] init];
            newArray = [NSArray arrayWithObjects:@"apple", @"orange", @"grapes", @"peach", nil];
            return newArray;
        }

        // this method is actually
        // -(void)connectionDidFinishLoading:(NSURLConnection *)connection
        - (void)method2 {
            mainArray = [self method1];
        }

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            mainArray = nil;
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [mainArray release];
            [super dealloc];
        }

        @end

    Read the article

  • Determine whether .NET assemblies were built from the same source

    - by Clayton
    Does anyone know of a way to compare two .NET assemblies to determine whether they were built from the "same" source files? I am aware that there are some differencing utilities available, such as the plugin for Reflector, but I am not interested in viewing differences in a GUI; I just want an automated way to compare a collection of binaries to see whether they were built from the same (or equivalent) source files. I understand that multiple different source files could produce the same IL, and realise that the process would only be sensitive to differences in the IL, not the original source. The main obstacle to just comparing the byte streams for the two assemblies is that .NET includes a field called the "MVID" (Module Version Identifier) in the assembly. This appears to have a different value for every compilation, so if you build the same code twice, the assemblies will differ. A related question is: does anyone know how to force the MVID to be the same for each compilation? This would avoid us needing a comparison process that is insensitive to differences in the value of the MVID. A consistent MVID would be preferable, as it would mean that standard checksums could be used. The background behind this is that a third-party company is responsible for independently reviewing and signing off our releases, prior to us being permitted to release to Production. This includes reviewing the source code. They want to independently confirm that the source code we give them matches the binaries that we earlier built, tested and currently plan to deploy. We are looking for a process that allows them to independently build the system from the source we supply, and then compare the checksums against the checksums for the binaries we have tested. Thanks.
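    As an illustration of an MVID-insensitive comparison, one hedged approach is to disassemble both builds with ildasm and diff the IL with the MVID comment line filtered out (other per-build comment lines, such as the time-date stamp, may need the same treatment):

        ildasm /text /all BuildA\MyAssembly.dll | findstr /v "MVID" > a.il
        ildasm /text /all BuildB\MyAssembly.dll | findstr /v "MVID" > b.il
        fc a.il b.il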

    Read the article

  • UIAlertViewDelegate method didDismissWithButtonIndex gets called while the phone is sleeping/locked.

    - by Rob
    I have a UIAlertView whose didDismissWithButtonIndex delegate method pops the view controller (same class - it's both the alert view delegate and the view controller) to return the user to the previous screen. The issue is that when you lock the phone before [alert show]; is called, something calls didDismissWithButtonIndex while the phone is locked. Since the response to that is to pop the view controller, which releases and deallocs it, I crash on the callback. What is causing this phantom button press? It seems like a framework bug, but I hate jumping to that conclusion. I'm definitely not hitting the button, because I hit a breakpoint in my code right before it's displayed. Then I lock the phone. Then I continue. I see it do the show, return to the event loop, and then, while the phone is still locked, hit my breakpoint in didDismissWithButtonIndex. There are a few internet/forum postings about similar spurious delegate calls, but no concrete answers. This happens on the simulator and on the device, on both OS 2.2 and OS 3.0. I'm assuming I'm missing something, but what?

    Update: Yeah, I created a simple project with just two view controllers, where the 2nd view controller creates the alert when it is displayed and shows it. Then I NSLog in the delegate method. When the phone is locked, it fires once while locked, and then again when it's unlocked and the button is clicked - two log messages. But when not locked, there's only one. I guess I'll open an issue, but it seems awfully obvious to have survived this long without anyone complaining. :-) I'm going to try to work around it by setting an isActive flag when the willResignActive/didBecomeActive notifications arrive, and skipping the delegate body if the app isn't active.

    Update: I went ahead in July after I posted this and created radar 7097363 for this issue. There's been no response. The workaround in practice works quite well: check the active status when processing the delegate, and skip the action if the app is inactive.
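    A minimal sketch of that workaround, assuming the controller keeps a BOOL isActive ivar (observers would be removed again in dealloc):

        // In viewDidLoad: track whether the app is active.
        [[NSNotificationCenter defaultCenter] addObserver:self
            selector:@selector(appWillResignActive)
            name:UIApplicationWillResignActiveNotification object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self
            selector:@selector(appDidBecomeActive)
            name:UIApplicationDidBecomeActiveNotification object:nil];
        isActive = YES;

        - (void)appWillResignActive { isActive = NO; }
        - (void)appDidBecomeActive { isActive = YES; }

        // Delegate: skip the spurious dismiss that arrives while locked.
        - (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
            if (!isActive) return;
            [self.navigationController popViewControllerAnimated:YES];
        }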

    Read the article

  • Unable to install Management Studio Express 2008 due to Visual Studio 2008 installation

    - by Jamie
    I am attempting to install Management Studio Express 2008 on a Win7 system that already has Visual Studio 2008 installed. I have installed VS 2008 SP1, which results in the About box for Visual Studio giving a version of 9.0.30792.1 SP. When I attempt the Management Studio installation I see a message that tells me that I must install SP1. However, this is already installed. After a fair bit of searching I came across the link below, where someone had commented with a command line option for the installation executable that forced it to skip the check. However, this simply pushed the error to later in the installation process. http://www.thushanfernando.com/index.php/2008/08/10/fix-rule-previous-releases-of-microsoft-visual-studio-2008-failed/ This seems to be a perennial problem for Microsoft. I can't remember a time when I've painlessly installed both Management Studio and Visual Studio. I can't imagine that this is an unusual combination, after all! Has anyone had any success in solving this problem before I take on the day-long task of uninstalling and reinstalling Visual Studio and all its associated bits?

    Read the article

  • Books and resources for Java Performance tuning - when working with databases, huge lists

    - by Arvind
    Hi all, I am relatively new to working on huge applications in Java. I am working on a Java web service which is pretty heavily used by various clients. The service basically queries the database (Hibernate) and then works with a lot of Lists (there are adapters to convert the list returned from the DB to the interface which the service publishes), and I am seeing a lot of issues with the service, like high CPU usage or high heap usage. While I can troubleshoot the performance issues using a profiler, I want to learn what I need to take care of when I actually write code - things like which kind of List to use, or using StringBuilder instead of String, etc. Is there any book or blog I can refer to that will help me as I write new services? Also, my application is multithreaded - each service call from a client is a new thread - and I want to know some best practices around that area as well. I did search the web, but I found many tips which are not relevant in the latest Java 6 releases, so I wanted to know what kind of resources would help a developer starting out now on Java for heavily used applications. Arvind

    Read the article

  • StarTeam trunk.

    - by Nix
    I have the unfortunate opportunity of doing source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that post production release we are spending countless hours merging changes into a "production support" environment. Please do not harass me - this was not my doing; I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool.

    Current setup:

    Product.1.0 (TRUNK, current production code; at this level are pending bug fixes)
    Product.2.0 (true trunk; anything checked in gets tested and then released in the next production cycle - a lot of changes occur in this view)

    My proposal is going to be to swap them: have all development done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes:

    Production
    Production.2.0.SP.1

    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.

    Read the article

  • SVN Question regarding branching and third party vendor branching

    - by fritzone
    Hi, we are developing an application which consists of: a source code base given to us by a partner infrequently (this is somewhat-working code, a "final" version of something; they have their own release cycle and version tracking), and, on top of that code base, the changes we make - either bug fixes or development of new features. Until now we have managed to create some code mayhem; as a result we would like to put all of this in an SVN repository. I would like to ask what you think is the best practice for this to happen with the least pain. The following are the things we consider important:

    - We would like to track our bug fixes/changes, since we cannot send bug fixes back to our software vendor, but we can report a bug (and they might or might not fix it). Everything we develop on their code base remains in-house; they are not interested in our changes.
    - As long as we don't get a new code base from the vendor, we consider their latest version to be the stable one we are working on. This might be branched down further, but the result is always a stable trunk, and the build is done based on this "stable" trunk.
    - When the vendor releases a new version, we would like to merge our "stable" trunk (which contains a lot of changes) with their changes, thus creating a new "stable" trunk (see the sketch below).
    - For each version we deploy (to clients) we should be able to later fix bugs only on that version, for clients who have installed our system using that specific version.
    - There are more developers working on the code base... (as usual :)

    Thanks a lot for the tips.
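    For reference, this is close to the classic Subversion "vendor branches" pattern; a rough sketch of the flow with plain svn commands (URLs and paths are placeholders, and later drops typically need svn_load_dirs.pl or manual add/delete bookkeeping before tagging):

        # first vendor drop: import and tag it
        svn import vendor-drop-1.0/ http://svn.example.com/repo/vendor/current -m "vendor drop 1.0"
        svn copy http://svn.example.com/repo/vendor/current \
                 http://svn.example.com/repo/vendor/1.0 -m "tag vendor 1.0"

        # after overlaying drop 2.0 onto vendor/current: tag it, then merge
        # the vendor delta into our stable trunk working copy
        svn copy http://svn.example.com/repo/vendor/current \
                 http://svn.example.com/repo/vendor/2.0 -m "tag vendor 2.0"
        svn merge http://svn.example.com/repo/vendor/1.0 \
                  http://svn.example.com/repo/vendor/2.0 trunk-wc/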

    Read the article

  • iPad + OpenGL ES2. Why the Puzzling Virtual Memory Spike During Device Reorientation?

    - by dugla
    I've been spending the afternoon staring at the Xcode Instruments memory monitor trying to decipher the following memory issue. I have a fullscreen OpenGL ES 2 app running on iPad. I am fanatical about memory issues, so my retains/releases are all nicely balanced and I closely monitor memory leaks. My app is basically squeaky clean - except occasionally when I reorient the device. Portrait to landscape, back and forth, I rock the device, stress-testing my discarding and rebuilding of the OpenGL framebuffer. The ambient memory footprint of my app is about 70MB real memory and 180MB virtual memory. Real memory hardly varies at all during device rotations. However, the virtual memory reading sometimes briefly spikes up to 250MB and then recedes back to 180MB. No real pattern, but it is clearly related to discarding/rebuilding the framebuffer. I see random memory warnings in my NSLogs, but the app just hums along, no worries.

    1) Since iPhone OS devices don't have VM, could someone explain to me what the VM reading actually means?

    2) My app is totally leak free and generally bulletproof despite the VM spikes. It never crashes; it's rock solid. Should I be concerned about this?

    3) There is clearly something happening in OpenGL framebuffer land that is causing this, but I am using the API in the proper way. Paraphrasing:

    Discarding the framebuffer:

        glDeleteRenderbuffers(1, &m_colorbuffer);
        glDeleteFramebuffers(1, &m_framebuffer);

    Rebuilding the framebuffer:

        glGenFramebuffers(1, &m_framebuffer);
        glGenRenderbuffers(1, &m_colorbuffer);

    Is there some other memory flushing trick I have missed? Thanks for any insight. Cheers, Doug
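    For reference, a sketch of the discard/rebuild sequence with the bind calls made explicit (standard GL ES 2 entry points in C; the EAGL storage call is paraphrased as a comment):

        /* Teardown: unbind before deleting so no stale bindings linger. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindRenderbuffer(GL_RENDERBUFFER, 0);
        glDeleteRenderbuffers(1, &m_colorbuffer);
        glDeleteFramebuffers(1, &m_framebuffer);

        /* Rebuild for the new orientation. */
        glGenFramebuffers(1, &m_framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
        glGenRenderbuffers(1, &m_colorbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, m_colorbuffer);
        /* [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer]; */
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, m_colorbuffer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            /* handle/log the incomplete framebuffer */
        }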

    Read the article

  • Add jar to Apache Felix in POM file?

    - by drozzy
    How can I add a jar to my bundle in Apache Felix? I am using Maven, with maven-bundle-plugin, to manage my bundles in an OBR for me, but I am not sure where in the POM to declare the dependency on the jar so that Maven correctly compiles it into the final bundle. This is how the plugin looks in my pom:

        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-bundle-plugin</artifactId>
          <version>2.1.0</version>
          <extensions>true</extensions>
          <configuration>
            <instructions>
              <Bundle-Category>sample</Bundle-Category>
              <Bundle-SymbolicName>${artifactId}</Bundle-SymbolicName>
              <Export-Package>
                //blahblah
              </Export-Package>
            </instructions>
            <!-- OBR -->
            <remoteOBR>repo-rel</remoteOBR>
            <prefixUrl>file:///C:/Users/blah/Projects/Eclipse3.6-RCP-64/Felix/obr-repo/releases</prefixUrl>
            <ignoreLock>true</ignoreLock>
          </configuration>
        </plugin>
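    If the goal is to compile a plain jar into the final bundle, maven-bundle-plugin's Embed-Dependency instruction, driven by an ordinary <dependency> entry in the same POM, might be the missing piece; a sketch (the group/artifact ids are placeholders):

        <!-- declare the jar as a normal Maven dependency -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>some-library</artifactId>
          <version>1.0</version>
        </dependency>

        <!-- then, inside the plugin's <instructions> element -->
        <Embed-Dependency>some-library;scope=compile|runtime</Embed-Dependency>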

    Read the article
