Search Results

Search found 20631 results on 826 pages for 'release management'.


  • PowerConnect 3548p SNTP and web interface not working

    - by Force Flow
    I have been unable to get SNTP and access to the web interface working properly on a Dell PowerConnect 3548p. In the logs, this message appears over and over again: 04-Jan-2000 20:19:29 :%MNGINF-W-ACL: Management ACL drop packet received on interface Vlan 172 from 172.17.0.3 to 172.18.0.10 protocol 17 service Snmp. VLAN 172 is the management VLAN, 172.17.0.3 is the DNS server, and 172.18.0.10 is the switch's IP address. The DNS server and the switch are located on different subnets and separated by routers. I am unable to access the web interface of the switch from the 172.17.x.x subnet; I can only access it from the 172.18.x.x subnet. There is also a managed Linksys switch on the 172.18.x.x subnet on VLAN 172, which has no problem with SNTP, and I can access it from the 172.17.x.x network. So it stands to reason that this is not a firewall or routing issue, but rather an issue with the 3548p switch itself. I suspect the problem is with management permissions/ACLs on the 3548p switch, but that's about as much as I've been able to determine so far. Any ideas?
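    The log line suggests the switch's management ACL only permits hosts on the management subnet, so traffic from 172.17.0.3 is dropped before it reaches the SNTP/HTTP services. The sketch below is an assumption based on the PowerConnect 35xx management ACL syntax, not verified on this switch, and may need adjusting for the actual firmware:

      ! syntax below is an assumption from the 35xx CLI guide, not verified on this unit
      console(config)# management access-list mgmt-allow
      console(config-macl)# permit ip-source 172.17.0.0 mask 255.255.0.0
      console(config-macl)# permit ip-source 172.18.0.0 mask 255.255.0.0
      console(config-macl)# exit
      console(config)# management access-class mgmt-allow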

    Read the article

  • Host name change breaking http? Fedora

    - by Dave
    OK, so I have been messing around on my development server. It has been a while since I have had my head in Linux and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP, and I had FTP running as well. Earlier today I decided I wanted to rename the machine, so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in httpd.conf. I rebooted the machine and reconnected to SSH fine. Later I was messing around with the FTP service (trying to tighten up the user security) and when I tried to connect remotely to FTP, no joy: it said it cannot connect. I thought that was weird, but I had planned to remove FTP anyway as we will be using GitHub, so I removed FTP and moved on. Then I tried to connect to the website, but major fail; even connecting to the IP address is failing. I used lynx to connect to localhost and there was my site, so something is going on at the server level. I thought maybe something was up with iptables, but I have not changed it; I tried adding HTTP but still no joy. The machine is Fedora release 17 (Beefy Miracle) - NAME=Fedora VERSION="17 (Beefy Miracle)" ID=fedora VERSION_ID=17 PRETTY_NAME="Fedora 17 (Beefy Miracle)" ANSI_COLOR="0;34" CPE_NAME="cpe:/o:fedoraproject:fedora:17" - Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC) ) #1 SMP Mon May 7 17:29:34 UTC 2012. This is my iptables -L: Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) target prot opt source destination. Like I say, I can use SSH with no issue, but HTTP, although running, is a no-go from a remote computer. Any ideas?
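    One thing worth noting from the listing above: the INPUT chain ends in a blanket REJECT, so an HTTP rule appended after it never matches. A minimal sketch of inserting the rule above the REJECT (rule position 4 is an assumption based on the listing shown):

      # insert an ACCEPT for port 80 before the final REJECT rule (position 4 is an assumption)
      iptables -I INPUT 4 -p tcp --dport 80 -m state --state NEW -j ACCEPT
      # persist across reboots with the classic iptables service, or save the rules file directly
      service iptables save   # alternatively: iptables-save > /etc/sysconfig/iptables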

    Read the article

  • What characters are illegal in Cisco IOS username secret passwords?

    - by Alain O'Dea
    I am using username secret to add users with encrypted passwords to our switches and firewall. I have been battling with the same switches and firewall for a couple of hours trying to get securely generated hard passwords for all admins. Sometimes, the passwords would go into config, but wouldn't work for login. According to the documentation for enable secret a password must not begin with a number and ? has to be entered as Ctrl-V then ? to escape it. I followed that and still got passwords I could not use sometimes. There was no error when I ran username, but the password would be rejected on login by some, but not all of the switches. They are all WS-C2960-48PST-L. The passwords it didn't like contained back ticks "`" (that character under tilde ~ under Esc). The "misbehaving" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(50)SE5, RELEASE SOFTWARE (fc1) The "working" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(46)SE, RELEASE SOFTWARE (fc2). The "misbehaving" switches are running a newer IOS, so this suggests a regression introduced somewhere between 12.2(46)SE and 12.2(50)SE5. I was unable to find any evidence of this being intentional in the release notes for 12.2(50)SE. I would like to avoid this next time the passwords are changed :) What characters are illegal in Cisco IOS username secret passwords? Thank you for your help :)
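    For reference, the escape sequence the documentation describes looks roughly like this at the CLI (the username and password below are made-up examples, not anything from the poster's environment):

      ! when typing the command below, press Ctrl-V immediately before the '?' so it is
      ! taken literally instead of invoking context help (per the enable secret docs)
      Switch(config)# username admin privilege 15 secret Xy7?r`q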

    Read the article

  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following: [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/" -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc library=static snapshot=on scons: Reading SConscript files ... ImportError: No module named bz2: File "/root/node/deps/v8/SConstruct", line 37: import js2c, utils File "/root/node/deps/v8/tools/js2c.py", line 36: import bz2 Waf: Leaving directory `/root/node/out' Build failed: -> task failed (err #2): {task: libv8.a SConstruct -> libv8.a} make: *** [program] Error 1 I've done some searching on Google but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, the ./configure went through perfectly fine with no errors, so there's nothing there that I can see. EDIT I've got a little further. Now I just need to upgrade G++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update G++?
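    For what it's worth, the ImportError comes from the Python that scons runs with (/usr/local/bin/python above) having been built without bz2 support. A rough sketch of one way to address that, with the source path and package availability as assumptions:

      # install the bzip2 headers so Python's bz2 module can be built
      yum install bzip2-devel
      # then rebuild the locally installed Python so it picks up bz2
      # (the source path below is hypothetical)
      cd /usr/local/src/Python-2.6.6
      ./configure --prefix=/usr/local && make && make install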

    Read the article

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines: machine A runs Management Studio, and machine B runs SQL Server 2008 R2 Enterprise x64. We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL: SELECT DISTINCT ST.TABLE_NAME as TableName FROM INFORMATION_SCHEMA.TABLES AS ST INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME WHERE ST.TABLE_TYPE = 'BASE TABLE' AND SC.COLUMN_NAME = 'Identity' AND ST.TABLE_NAME != 'dtproperties' ORDER BY ST.TABLE_NAME The SQL that is in error (as reported by SQL Server): SELECT DISTINCT ST.TABLE_NAME as TableName FROM INFORMATION_SCHEMA.TABLES AS ST INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa? The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either with the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from the period where an error occurs, which shows the SQL has already been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?

    Read the article

  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess resource into an existing Exchange configuration. All prerequisites are installed (the Exchange organization already exists). This is my resource section: WindowsProcess Exchange2013 { Credential=$credential Path= "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe" Arguments= "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013" Ensure= "Present" } #End Filter } #End Node } # End configuration /* @TargetNode='TargetDSC02' @GeneratedBy=exadmin @GenerationDate=08/02/2014 08:16:03 @GenerationHost=SOURCEDSC02 */ instance of MSFT_Credential as $MSFT_Credential1ref { Password = "Password1"; UserName = "S05\\Exadmin"; }; Exadmin is a member of the Organization Management group and also a member of the Domain Admins group, so it should be able to install Exchange. When I execute this resource, the Exchange installation starts, but after about a minute it stops with this error: Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management' role group or a member of the 'Enterprise Admins' group to continue.] To confirm that permissions really are the problem, I created a test user with only local Administrator rights on the Exchange server and no Exchange permissions, and ran .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /Targetdir:C:\EX2013 manually on the new Exchange server. I got the same error as with DSC. After adding my test user to the Organization Management group and running the same setup command manually again, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is a permissions issue.
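    For context, a minimal sketch of how such a configuration is typically compiled and pushed with an explicit credential (the configuration name ExchangeInstall, the -Credential parameter on it, and the PSDscAllowPlainTextPassword setting are assumptions for illustration; plain-text passwords in MOFs are for lab use only):

      # assumption: the DSC configuration above is named ExchangeInstall and declares a $Credential parameter
      $configData = @{ AllNodes = @( @{ NodeName = 'TargetDSC02'; PSDscAllowPlainTextPassword = $true } ) }
      $cred = Get-Credential 'S05\Exadmin'   # the account that is in Organization Management
      ExchangeInstall -ConfigurationData $configData -Credential $cred -OutputPath .\ExchangeInstall
      Start-DscConfiguration -Path .\ExchangeInstall -Wait -Verbose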

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables, because the bandwidth required to log-ship the entire database (our current solution) is too high. Replication directly to our servers is also out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to do some form of replication of the tables we need into another database on the same server and log-ship that second, smaller database, but management is concerned because the client has broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against it; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.
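    If per-table transactional replication does end up being considered, its shape is roughly the sketch below (the database, publication, and table names are placeholders, not the poster's actual objects):

      -- enable the source database for publishing, then publish only selected tables
      USE ClientDB;
      EXEC sp_replicationdboption @dbname = N'ClientDB', @optname = N'publish', @value = N'true';
      EXEC sp_addpublication @publication = N'SelectedTables', @repl_freq = N'continuous', @status = N'active';
      EXEC sp_addarticle @publication = N'SelectedTables', @article = N'Orders',
           @source_owner = N'dbo', @source_object = N'Orders';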

    Read the article

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following: ---> System.InvalidOperationException: Unable to generate a temporary class (result=1). error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found error CS2008: No inputs specified ...which occurs in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report: Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer) If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases. If not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
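    For anyone wanting to reproduce this the same way, the one-liner mentioned above amounts to broadcasting WM_SETTINGCHANGE with the "Environment" section name; a minimal sketch:

      #include <windows.h>
      int main() {
          // ask top-level windows (including Explorer) to re-read the environment block
          DWORD_PTR result = 0;
          SendMessageTimeoutW(HWND_BROADCAST, WM_SETTINGCHANGE, 0,
                              reinterpret_cast<LPARAM>(L"Environment"),
                              SMTO_ABORTIFHUNG, 5000, &result);
          return 0;
      }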

    Read the article

  • What could be wrong with my VLAN?

    - by Matt
    I've got VLAN 10 set up as a management VLAN. The management VLAN comes off port 48 and links to another set of switches that do not support VLANs, so it was, I believe, set up as an untagged access port. In the past this was a different brand of switch and it worked fine. However, since changing to the HP V1910-48G series I can't seem to get it working. I must point out that as far as I'm aware it is wired up properly (I can't check this physically as I'm working remotely, and have asked the tech who has access to double-check for me). I don't have a huge amount of experience with VLAN environments, but AFAIK this is right: I've set port 48 (linked to the management switches) as an untagged port with PVID 10 and an access link type. Is this all I'd need to do from a configuration perspective to ensure all devices connected to port 48 end up on VLAN 10 without needing to tag their frames, i.e. the tag would be added by the switch before being forwarded?

    Read the article

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an rpm using the rpmbuild tool. I have source code which builds around 30 GB of binaries. The software for which I am making the rpm has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my rpm builds successfully. But when I dump the entire build into the rpm, rpmbuild does everything but gives a seg fault at the end. Here is my spec file: # This is a sample spec file for wget %define _topdir /root/mywget %define name source %define release 1 %define version 1.12 %define _builddir /root/mywget/BUILD/glenlivet %define _buildrootdir /root/mywget/BUILDROOT %define _buildroot /root/mywget/BUILDROOT %define _sourcedir /root/mywget/SOURCES BuildRoot: %{_buildroot} Summary: GNU source License: GPL Name: %{name} Version: %{version} Release: %{release} Source: %{name}-%{version}.tar.gz Prefix: /usr Group: Development/Tools %description The GNU sample program downloads files from the Internet using the command-line. %prep %setup -q -n glenlivet %build cd %{_builddir} make all %install rm -rf %{_buildrootdir} mkdir -p %{_buildrootdir}/bin cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/ %files %defattr(-,root,root) /bin/* If I only copy some of the binaries (let's say one utility and its dependent binaries) it works fine. But when I try to copy the entire build, I get a seg fault. I get the seg fault after rpmbuild has executed these sections: %prep %build %install rpmbuild also processes my source file. Processing files: source-1.12-1 Finding Provides: Finding Requires: Finding Supplements: Provides:...... Requires:...... Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT Segmentation fault Any clue what is going wrong, or where rpmbuild fails? Thanks in advance.

    Read the article

  • Un-do Windows disk convert HFS+

    - by BLAKE
    Last night, a friend asked me to give him a copy of a Word document. He handed me an external hard drive and left. I plugged the hard drive into my file server running Windows Server 2003, opened Disk Management and clicked OK. (I know that in Windows 2003 you need to manually assign a drive letter to external drives.) I then looked at the drive in Disk Management and it said that it was unallocated space. I called my friend and he said that there was data on the drive, but he used it with his MacBook. Apparently when I clicked OK in Disk Management I converted the drive from the HFS+ file system to something else. Is there any way to undo the disk convert? I immediately removed the drive, so there was no writing to it. Windows did not format the drive, it just converted it. Is the data still there? All the data recovery programs I have are for Windows; can they read the Mac file system? I need to get the data back. What can I do?

    Read the article

  • Volume is no longer showing in Raid Controller BIOS and in Windows

    - by Gordon
    Hi all, I installed some critical Windows updates yesterday and now my external RAID volume no longer shows up in Windows Vista x64. All updates went through successfully. From their descriptions, I cannot see how they should relate to the issue, but this is the only change that happened, so who knows. Anyway, here are the details: I have an external eSATA enclosure that is running on a SiI4726 controller. I can connect to the controller with its management utility from the computer the enclosure is connected to. The three drives in the enclosure show up as JBODs. I had those drives configured as one logical RAID5 drive. RAID management is done through a SiI3132 SoftRaid controller. The RAID management utility just shows empty channels where it usually shows the RAID group. In the Windows Disk Manager, I can see an unknown, uninitialized device. This is fine according to the setup manual. What it doesn't show is my RAID drive. It's gone. Also, when booting Windows, the BIOS of the controller used to show the RAID volume before booting the OS. This is not happening anymore. Updating drivers and firmware did not help. I have made sure the drivers and firmware are compatible with each other. And like I said, it used to work before. Any clues?

    Read the article

  • Good support to multiple desktops AND multiple monitors in Linux (Ubuntu)?

    - by Somebody still uses you MS-DOS
    I'm starting to have A LOT of open windows on my machine. Sometimes within a project I have e-mail, task management, personal e-mail, Twitter, and a lot of different open applications/terminals in my Linux environment. Nowadays I have 4 workspaces: Corporate management (e-mail) and corporate messenger; Work (documents, requisites); Dev (development: all gVim windows, terminal, and Firefox for development); Personal (personal stuff: personal e-mail, Delicious, Twitter and so on). Sometimes it would be useful to have a workspace per project instead of this configuration, which groups by class of work (bad name, I know, but I think you get the idea). I'm starting to think about using two monitors: one with Corporate management, Work and Personal. The second monitor is only for development: each workspace there is for a project being worked on, instead of the groups of work like before. A workspace may be for implementing different classes, for example. My question is: I just want to change to the second monitor using the mouse. I want to still be able to change workspaces on the same monitor using keyboard shortcuts. The keyboard shortcuts wouldn't change monitors, just workspaces on the same monitor. Does Linux (Ubuntu 10.04 Lucid Lynx) support this envisioned setup? If so, how?

    Read the article

  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router with an AlliedWare OS (2.9.1) I would like to disable access to all management services of the router except for a number of subnets (or alternatively have what is a "management VLAN" with other manufacturers' switch and router models). What I have tried so far: creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended as I can access the LOCAL IP set from any of the other VLAN interfaces - the traffic is apparently not going through my defined filter set at all creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all, the counters for the filter set remain at zero packets setting the Remote Security Officer Level IP address range: this only restricts the ability for a user with the Security Officer privilege level to log in from any but the specified address ranges / subnets. Unfortunately, it does not prevent service availability (and thus DoS capacity) or the ability to log in as a less privileged user (e.g. a "manager") calling technical support: unfortunately no solution so far What I have not tried: creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP: I would like to reduce the overhead induced by IP filters as the router already is CPU-constrained at times. Setting up filters for every IP interface would mean that each and every traffic packet would have to pass the filters, thus consuming CPU cycles. If by any means possible, I would like to find a different solution.

    Read the article

  • How to format my external HDD back to "removable storage"?

    - by user990106
    Recently I formatted my Seagate FreeAgent GoFlex external HDD in Mac OS X using a GUID partition table, since I wanted to install another copy of Mac OS X onto that external HDD. However, I changed my mind after the external HDD had been formatted. Now I want to format the external HDD back to NTFS so that I can use it with Windows 7. However, after I connected the external HDD via USB it didn't show up in "Computer", so I used Disk Management to check what was wrong with it. In Disk Management I saw that one partition of the external HDD was called "EFI partition", and I found that I could not delete this partition in Disk Management. So I used diskpart in cmd, selected the external HDD, and ran the "clean" command. Then the EFI partition was gone and I created a new volume on the external HDD. However, after the volume was created, the external HDD did show up in "Computer", but it is listed under "Hard Disk Drives", not under "Devices with Removable Storage" as it used to be. I'm wondering if I can do anything to make it be recognized as a device with removable storage again?
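    For reference, the diskpart sequence described above, plus recreating an NTFS volume, looks roughly like this (the disk number is an assumption; selecting the wrong disk destroys its data):

      diskpart
      DISKPART> list disk
      DISKPART> rem assumption: disk 1 is the external GoFlex drive
      DISKPART> select disk 1
      DISKPART> clean
      DISKPART> create partition primary
      DISKPART> format fs=ntfs quick
      DISKPART> assign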

    Read the article

  • 2008R2 Standard and Hyper-V and Ram Usage (Usable vs Available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management we bought a fairly beefy machine to handle this task and to also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving towards a full Lab Management setup using Hyper-V. My question: upon initially logging in I noticed that Windows registers 64 GB of RAM but only 32 GB usable. I know this is a licensing limitation, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64 GB? Before I go hog wild and start using TFS I wanted to ask some experts so I don't need to reinstall the OS down the road to utilize the additional 32 GB. Thanks for any help or links you can share.

    Read the article

  • Where do all the old programmers go?

    - by Tony Lambert
    I know some people move over to management and some die... but where do the rest go and why? One reason people change to management is that in some companies the "Programmer" career path is very short - you can get to be a senior programmer within a few years. Leaving no way to get more money but to become a manager. In other companies project managers and programmers are parallel career paths so your project manager can be your junior. Tony

    Read the article

  • Replacing UISplitViewController. New version isn't added for current orientation, frame size

    - by Justin Williams
    My application has two different modes I am switching between by replacing the two views in a UISplitViewController. This involves instantiating the new views and a new UISplitViewController. I then set the new detail view as the UISplitViewController's delegate. I'm running into an issue where when I replace the view controllers and splitview controller, they are not properly sized or added for the current orientation. For instance, if I have my iPad in UIDeviceOrientationPortraitUpsideDown the view will be upside down when I call addSubview, but will rotate to the proper orientation after a second. In another instance, if I have the device in landscape mode and swap the views, the detail view controller is not fully stretched to the size of the view. If I rotate the device to portrait and back to landscape, its resized properly. The code I'm using to create the new split view and view controllers is as follows. - (void)showNotes { teiphoneAppDelegate *appDelegate = (teiphoneAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate.splitViewController.view removeFromSuperview]; OMSavedScrapsController *notesViewController = [[OMSavedScrapsController alloc] initWithNibName:@"SavedScraps" bundle:nil]; UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:notesViewController]; ComposeViewController *noteDetailViewController = [[ComposeViewController alloc] initWithNibName:@"ComposeView-iPad" bundle:nil]; UISplitViewController *newSplitVC = [[UISplitViewController alloc] init]; newSplitVC.viewControllers = [NSArray arrayWithObjects:navController, noteDetailViewController, nil]; newSplitVC.delegate = noteDetailViewController; appDelegate.splitViewController = newSplitVC; // Fade in the new split view appDelegate.splitViewController.view.alpha = 0.0f; [appDelegate.window addSubview:appDelegate.splitViewController.view]; [appDelegate.window makeKeyAndVisible]; [UIView beginAnimations:nil context:appDelegate.window]; [UIView setAnimationDuration: 0.5f]; [UIView setAnimationCurve: UIViewAnimationCurveEaseInOut]; appDelegate.splitViewController.view.alpha = 1.0f; [UIView commitAnimations]; [notesViewController release]; [navController release]; [noteDetailViewController release]; [newSplitVC release]; } Any suggestions for how to get the new splitview to add for the device's current orientation and frame?

    Read the article

  • popViewControllerAnimated not working, but back button works

    - by Manu
    Hi, I am creating an application based on the Utility template. The main screen consists of a menu with several buttons, each of which creates a different flip-side view. In one of those flip-side views I also configured a Navigation Controller, which works perfectly as long as I have the NavigationBar activated... I can push the view but I have to use the "back" button to return to my flip-side view, which would be the root of the Navigation Controller. The problem comes if I try to go back using "popViewControllerAnimated", properly configured with a button, instead of the "back" button of the NavigationBar. My application crashes for some reason and I am not able to understand why. I could just use the "back" button in the NavigationBar and forget about the problem, but I would prefer to have my own button in order to go back. My app consists of the following: My APPDelegate.m: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { MenuViewController *menuController = [[MenuViewController alloc] initWithNibName:@"MenuView" bundle:nil]; self.menuViewController = menuController; [menuController release]; menuViewController.view.frame = [UIScreen mainScreen].applicationFrame; [window addSubview:[menuViewController view]]; [window makeKeyAndVisible]; return YES; } MenuViewController.m starting my flip-side view: - (IBAction)showFuelUpliftView { FuelUpliftViewController *controller = [[FuelUpliftViewController alloc] initWithNibName:@"FuelUpliftView" bundle:nil]; controller.delegate = self; controller.title = @"Fuel Uplift"; UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:controller]; [navController setNavigationBarHidden: NO]; navController.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController:navController animated:YES]; [navController release]; [controller release]; } FuelUpliftViewController.m, where I push the second view of the NavigationController with a button: - (IBAction)showFuelUplift2View:(id)sender { UIViewController *controller = [[UIViewController alloc] initWithNibName:@"FuelUplift2View" bundle:nil]; controller.title = @"Settings"; [self.navigationController pushViewController:controller animated:YES]; [controller release]; } And finally, my FuelUplift2ViewController.m, where the app crashes when trying to go back: - (IBAction)backFromFuelUplift2View { [self.navigationController popViewControllerAnimated:YES]; } I do not know if all this makes sense, I am currently beginning with my first application and am still learning thanks to the traditional trial an error method. Nevertheless, I cannot see the reason for this problem and would appreciate any help I can get. Thanks very much, Manu

    Read the article

  • Cannot run program "make": Launching failed (using MinGW-5.1.6, not Cygwin, in Eclipse)

    - by kranthikumar
    **** Build of configuration Release for project helloworld **** **** WARNING: The "Release" Configuration may not build **** **** because it uses the "Cygwin GCC" **** **** tool-chain that is unsupported on this system. **** **** Attempting to build... **** (Cannot run program "make": Launching failed) I am using MinGW-5.1.6 here, not Cygwin, in eclipse-SDK-3.2.2-win32. Please, can anyone help me solve this problem? Yours faithfully, anilkumar.k
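    The console banner shows the project is still using the "Cygwin GCC" tool-chain, so Eclipse is looking for Cygwin's make. A rough sketch of pointing it at MinGW instead (the install path is an assumption for a default MinGW-5.1.6 setup):

      rem make sure MinGW's tools, including mingw32-make, are on the PATH
      set PATH=C:\MinGW\bin;%PATH%
      mingw32-make --version
      rem in Eclipse: Project > Properties > C/C++ Build, switch the tool-chain to
      rem "MinGW GCC" and set the build command to mingw32-make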

    Read the article

  • msbuild TransformWebConfig task - doesn't work for App.configs?

    - by Jeff D
    I have a Windows service that will need the same kind of transformations that web.configs use, but VS 2010 doesn't seem to support that. I've tried manually adding App.Release.config files and then running msbuild [PROJ] /T:TransformWebConfig /p:Configuration=Release, but no transformation is performed. I got a TransformWebConfig folder created in my obj subdirectory, but that's it. Is this thing hardcoded to only work with web.configs?
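    One workaround people use is to call the TransformXml task directly from the project file instead of the TransformWebConfig target; a sketch below, where the assembly path and target wiring are assumptions for a default VS 2010 install:

      <!-- a sketch, assuming VS 2010's web-publishing tasks are installed at this path -->
      <UsingTask TaskName="TransformXml"
                 AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
      <Target Name="TransformAppConfig" AfterTargets="Build" Condition="Exists('App.$(Configuration).config')">
        <TransformXml Source="App.config"
                      Transform="App.$(Configuration).config"
                      Destination="$(OutputPath)$(AssemblyName)$(TargetExt).config" />
      </Target>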

    Read the article

  • Is the dealloc method called when dismissing a modal view controller?

    - by Madan Mohan
    Hi guys, the following code is used to present the modal view controller: [[self navigationController] presentModalViewController:doctorListViewNavigationController animated:YES]; and this is the code to dismiss it: -(void)closeAction { [[self navigationController] dismissModalViewControllerAnimated:YES]; } My problem is that the dealloc method is not called, so I am getting memory issues such as growing object allocations and leaks: - (void)dealloc { [doctorList release]; [myTableView release]; [super dealloc]; }
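    A common reason dealloc never runs in this pattern is an extra retain left on the presented controller. A minimal sketch of the usual ownership flow under manual reference counting (the controller names mirror the excerpt but are otherwise assumptions):

      // present, then give up local ownership so dismissal can trigger dealloc
      UINavigationController *doctorListViewNavigationController = [[UINavigationController alloc]
          initWithRootViewController:doctorListViewController];
      [[self navigationController] presentModalViewController:doctorListViewNavigationController animated:YES];
      [doctorListViewNavigationController release];   // balance the alloc above
      [doctorListViewController release];             // the nav controller retains its root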

    Read the article

  • Why does 7zip ignore my InstallPath when making an SFX installer?

    - by Ben
    Currently, I am making a SFX with 7zip using the following config: ;!@Install@!UTF-8! InstallPath="C:\\test" GUIMode="2" RunProgram="7z465.exe" ;!@InstallEnd@! Which is used in: copy /b "C:\Program Files\7-Zip\7zSD.sfx" + config.txt + ".\Release\Setup.7z" .\Release\Setup.exe When I run the resulting Setup.exe, It extracts fine and launches the 7z465.exe as well, but it is still extracting to some 7zip temp folder for the current user! Anyone have any idea why?

    Read the article
