Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.


  • Windows Server Hyper-V guests cannot see each other on network

    - by Noldorin
    I have a Hyper-V physical machine along with two standard laptops running within my LAN (connected by an ASUS-RT56U router). The physical server runs Windows Hyper-V Server 2008 R2, with two Windows Server 2008 R2 (full) guest VMs installed and running within it. Both laptops run Windows 7. All OSs are 64-bit.

    Opening up Network in Windows Explorer on either of the two laptops displays both laptops in the LAN fine. However, neither of the guest VMs on the server (nor the host itself) is displayed. Indeed, the guest VMs cannot see each other in Network view either. I can ping all computers (laptops and servers) without problems from within the LAN, but all of the servers are simply not visible from anywhere.

    In addition, the Network Map screen (accessible via Network and Sharing Center) gives me an error message: "An error happened during the mapping process." I'm suspecting this might have something to do with how LLTD (Link Layer Topology Discovery) is working on the network. Worth noting, though, is that before my server was on the network, the Network Map screen displayed fine (as far as I can remember).
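    As a quick check from one of the laptops, a minimal Python sketch along these lines can test whether the ports that network browsing relies on (SMB on 445/139, WS-Discovery on 5357) are reachable on the VMs and the host. The host names are placeholders, and LLTD itself is a layer-2 protocol that cannot be probed this way:

        # port_probe.py - minimal sketch; the host names are placeholders for the
        # two guest VMs and the Hyper-V host. LLTD itself runs at layer 2 and
        # cannot be tested with TCP, but SMB (445/139) and WS-Discovery (5357)
        # both influence whether a machine shows up under Network.
        import socket

        HOSTS = ["guest-vm1", "guest-vm2", "hyperv-host"]   # assumed names
        PORTS = [445, 139, 5357]

        for host in HOSTS:
            for port in PORTS:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.settimeout(2)
                try:
                    sock.connect((host, port))
                    print(f"{host}:{port} open")
                except OSError as exc:
                    print(f"{host}:{port} unreachable ({exc})")
                finally:
                    sock.close()

    If the discovery-related ports turn out to be filtered on the guests, that points at the guest firewall profile rather than at Hyper-V networking itself.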


  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages when they require NTLM or Kerberos auth. I tested with SharePoint 2007 and tried all three authentication methods (NTLM, Kerberos, Basic). Accessing the site without squid works in all cases. When I access the same page through squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured).

    It's a bit tricky to find solutions online, since most of the topics whirl around auth_param for authenticating users to squid rather than authenticating users to a web page behind squid. Could anyone help?

    Edit: Sorry, but my first test was totally screwed up. I tested against the wrong web servers (memo to myself: always check assumptions before testing). Now I realize that the problem scenario is completely different:
    - Kerberos works for IE
    - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
    - NTLM works for IE
    - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)

    By the way: the feature that provides NTLM passthrough in squid is called "connection pinning", and the corresponding HTTP header is "Proxy-support: Session-based-authentication".
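    To see which authentication schemes actually survive the proxy hop, a small sketch like the one below (the proxy address and SharePoint URL are placeholders) sends one unauthenticated request through squid and prints the challenge headers the client ends up seeing:

        # proxy_auth_probe.py - sketch only; the proxy address and target URL
        # are placeholders. Sends an unauthenticated request through squid and
        # prints the auth-related response headers, so you can compare what the
        # browser is offered with and without the proxy in the path.
        import http.client

        PROXY_HOST, PROXY_PORT = "squid.example.local", 3128     # assumed
        TARGET = "http://sharepoint.example.local/default.aspx"  # assumed

        conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT, timeout=10)
        # With a plain HTTP proxy, the request line carries the absolute URL.
        conn.request("GET", TARGET, headers={"Host": "sharepoint.example.local"})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        for name, value in resp.getheaders():
            if name.lower() in ("www-authenticate", "proxy-support",
                                "connection", "proxy-connection", "via"):
                print(f"{name}: {value}")
        conn.close()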


  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network in which we improve resiliency by adding a second datacentre, with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods. See earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

    1) If we have two locations, do we announce the same IP address block from both locations?
    2) If we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B, what is the best-practice mechanism for reaching both? If I were nearest to A, all my traffic would go to A, so I could reach x.x.x.50; but if I attempted to connect to x.x.x.150 I'd fail, because that address is only valid at B.

    Is the best solution to announce two different netblocks, one at each location (which reintroduces the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?


  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to it. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists.

    How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements.

    Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance.

    Jordan Westerman
    D.B.A. Badfish Productions, LLC


  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5, Debian Squeeze) froze up over the weekend and I was forced to reset it: it was unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures - the one time I don't check for confirmation).

    After the reset, it turns out that every trace of disk I/O activity that should have happened for a period of ~24 hours is completely gone. The log files have a big gap in the dates and times. It is as if the writes were never committed to disk, and no processes seem to have run. Luckily it was a weekend and nothing of value was lost, and I don't suspect a hack.

    What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk-checking tools right now, but there must be more going on!

        Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
        Kernel: Linux dev 2.6.32-5-686-bigmem
        Disk/Inodes: 13%/3%


  • mrepo and grouplist/groupinstall: groups not working as expected

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the
        groups file should be in the same directory as the rpm packages
        (i.e. /path/to/rpms/comps.xml).
        createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
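    One thing worth checking is whether the repodata the clients actually download references a group file at all. A minimal sketch along these lines (the repomd.xml path is an assumption; point it at whatever mrepo publishes under the web root) reports which metadata types are advertised:

        # check_groups.py - sketch only; REPOMD is an assumption about where
        # mrepo publishes the repodata that clients fetch.
        import xml.etree.ElementTree as ET

        REPOMD = "/var/www/mrepo/SL6-x86_64/os/repodata/repomd.xml"   # assumed
        NS = {"repo": "http://linux.duke.edu/metadata/repo"}

        root = ET.parse(REPOMD).getroot()
        types = [data.get("type") for data in root.findall("repo:data", NS)]
        print("metadata types advertised:", ", ".join(types))
        if not any(t in ("group", "group_gz") for t in types):
            print("no group metadata referenced - yum grouplist will find nothing")

    If the published repomd.xml has no group entry, the comps file was attached to a repodata directory the clients never see.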


  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up which won't serve any content at all. Trying to connect from another host via telnet yields 'connection failed':

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.), and TCPView confirms this, listing no open connections on port 80. The following services related to the Web role are running:

        World Wide Web Publishing Service
        Application Host Helper Service
        Microsoft FTP Service (ftp connections to port 21 are granted)
        Windows Process Activation Service

    The default website bindings are:

        Type              Host Name   Port   IP Address   Binding Information
        http                          80     *
        net.tcp                                            808:*
        net.pipe                                           *
        net.msmq                                           localhost
        msmq.formatname                                    localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running, and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs.

    I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
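    Since W3SVC is running but nothing is listening on port 80, one angle worth automating is to confirm the state of the kernel-mode HTTP listener itself. A rough Python sketch of that check (run on the server; it only shells out to standard Windows tools):

        # iis_port80_check.py - diagnostic sketch; runs standard Windows tools
        # and prints their output so you can see whether HTTP.SYS is up and
        # whether any listeners or URL registrations exist for port 80.
        import subprocess

        COMMANDS = [
            ["sc", "query", "http"],                    # kernel-mode HTTP driver
            ["sc", "query", "w3svc"],                   # WWW Publishing Service
            ["netsh", "http", "show", "servicestate"],  # URLs registered with HTTP.SYS
            ["netstat", "-ano", "-p", "tcp"],           # raw listener list
        ]

        for cmd in COMMANDS:
            print("===", " ".join(cmd))
            out = subprocess.run(cmd, capture_output=True, text=True).stdout
            if cmd[0] == "netstat":
                out = "\n".join(line for line in out.splitlines() if ":80 " in line)
            print(out or "(no matching output)")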


  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer-3 packet is visible. We've recently run across a problem that has me stumped.

    If the XML messages (i.e., UDP packets) are large (i.e., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail because it attempts to process each UDP packet as if it were a complete XML message. This is a known shortcoming in the system at this stage of its development. On Windows 7, this is exactly what happens: the fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine.

    Does Windows XP automatically reassemble UDP fragments? I guess I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?
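    For comparison, the behaviour of a normal (non-raw) UDP socket is easy to demonstrate: the IP layer reassembles fragments before the datagram is handed to the socket, so a plain SOCK_DGRAM receiver sees one complete message even when it was fragmented on the wire. A minimal sketch (the receiver host name and port are placeholders):

        # udp_reassembly_demo.py - sketch; run the receiver first, then the
        # sender, on the same LAN. The ~4 KB payload exceeds a 1500-byte MTU
        # and is fragmented at the IP layer, yet recvfrom() returns it in one
        # piece because reassembly happens below the UDP socket. Raw sockets
        # sit below that point, which is why they see individual fragments.
        import socket
        import sys

        PORT = 9999                                   # arbitrary test port
        PAYLOAD = b"<msg>" + b"x" * 4000 + b"</msg>"

        if sys.argv[1:] == ["send"]:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.sendto(PAYLOAD, ("receiver.example.local", PORT))  # assumed host
        else:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", PORT))
            data, addr = s.recvfrom(65535)
            print(f"received {len(data)} bytes from {addr} in a single datagram")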


  • WAMP starts Apache or MySQL, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host - I'll call him WampUser.

    When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either successfully. When I attempt to start the other, I get Error 1053: "The service did not respond to the start or control request in a timely fashion." This error appears immediately - there is no timeout. When at least one of the services is set to start as LocalService, both start fine.

    I can, therefore, solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place. I'm especially curious because this situation does not occur on other servers - something I've done to this server has made these two services exclusive when running as the same user.

    Here are some miscellaneous data points:
    - I am using Windows Server 2003.
    - I've granted recursive Full Control on the C:\wamp directory to WampUser.
    - Nothing appears in the event log after the service fails.
    - No log entries appear in either the MySQL log or the Apache error log.
    - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?


  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images, they completely fail to show up, showing the icon of the default image viewing application instead. Please note that the "Always show icons, never thumbnails" option in Folder Options is not checked. I've taken a screenshot demonstrating the problem.

    Sometimes a few image thumbnails will show up correctly - maybe about one in ten - with the rest failing as well; a second screenshot shows a handful of thumbnails visible. Windows Explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage. I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails; only images that have been on the system since day one will occasionally show them.

    This leads me to believe that Explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of Windows, explorer.exe itself was responsible for thumbnail generation - has this changed? Any suggestions at all - even if you aren't sure that they will work - are welcome. I'd hate to have to wipe and reinstall this machine for such a minor annoyance.
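    One low-risk thing to inspect is the per-user thumbnail cache that Explorer maintains; a damaged cache is a common reason thumbnails stop being generated. A small sketch that just lists the cache files (the path is the usual Windows 7 location, but treat it as an assumption):

        # thumbcache_inspect.py - sketch; lists the per-user Explorer thumbnail
        # cache files at their usual Windows 7 location (an assumption). The
        # files can be deleted with Explorer stopped and will be rebuilt, but
        # this script only reads their sizes.
        import glob
        import os

        cache_dir = os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\Windows\Explorer")
        for path in sorted(glob.glob(os.path.join(cache_dir, "thumbcache_*.db"))):
            print(f"{os.path.getsize(path):>12,} bytes  {os.path.basename(path)}")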


  • VirtualBox snapshot size

    - by intuited
    I've started using Windows 7 under VirtualBox on an Ubuntu 10.10 host. I took about 6 snapshots over the course of setting up the VM from the Windows restore image that came with the computer. My installations were more or less limited to Windows updates, antivirus, and the VirtualBox Guest Additions; I uninstalled much more than I installed. The VM was running for about 24 hours total.

    The snapshots increased in size at a worrisome rate, even when the machine was idle: the snapshot .vdi file for the period between 11:22 PM and 9:02 AM is 6 GB in size, and during that time very little happened. The other .vdi files are between 0.5 and 3 GB, most between 1 and 2 GB. The corresponding .sav files are between 0.5 and 1 GB. The Internet connection where I was doing this is limited to 30 KB/s download, which, constantly saturated, works out to less than 3 GB per 24-hour period.

    Is this normal? Is there something that can be done to make snapshots more practical?

    Update: On starting up the VM again, I've noticed that mscorsvw is using significant processing time. Apparently this process precompiles .NET assemblies. This may have been going on during the period when I was taking snapshots, which might explain some of the snapshot size increase. I would be somewhat surprised to learn that this could be responsible for over 10 GB of additional disk usage, or that it would run for roughly 24 hours. Is this possible?
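    To see exactly which differencing images are growing, a small sketch like this (it only assumes VBoxManage is on the PATH) lists every registered disk image and its current size on disk:

        # vbox_disk_sizes.py - sketch; shells out to VBoxManage (assumed to be
        # on PATH) and prints the on-disk size of each registered disk image,
        # which makes it easy to spot which snapshot's differencing .vdi is
        # ballooning between two runs.
        import os
        import subprocess

        out = subprocess.run(["VBoxManage", "list", "hdds"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith("Location:"):
                path = line.split(":", 1)[1].strip()
                if os.path.exists(path):
                    print(f"{os.path.getsize(path) / 2**30:8.2f} GiB  {path}")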


  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox and xauth, exported the environment variable DISPLAY=localhost:1, and started Xvfb. After that I could start Selenium tests using Maven.

    Tests are executed, but there's an issue with TeamCity. TeamCity starts behaving absolutely inadequately (it confuses images on the page, sends XML or strange text - ampersands and numbers - in responses, and is a bit slower), and the tests run four times slower on the server (1h 15m) than on a tester's Windows 7-based machine (25m). It's worth noting that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client).

    In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4 GB of RAM, and during testing 400 MB of RAM and 1.2 GB of swap are in use. TeamCity and Firefox use about 65% of CPU during testing. There's no firefox process left after the end of testing.

    My knowledge of Selenium is weak. I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.
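    The build itself uses selenium-java via Maven, but to sanity-check the Xvfb/DISPLAY wiring outside of TeamCity, the same idea can be run as a standalone Python sketch (this assumes Xvfb and the Python selenium package are installed; the Jetty URL is a placeholder):

        # headless_check.py - sketch only; confirms Firefox really runs against
        # the virtual display outside of TeamCity. Assumes Xvfb and the Python
        # 'selenium' package are installed; the URL is a placeholder.
        import os
        import subprocess
        import time
        from selenium import webdriver

        xvfb = subprocess.Popen(["Xvfb", ":1", "-screen", "0", "1280x1024x24"])
        os.environ["DISPLAY"] = "localhost:1"
        time.sleep(2)                            # give Xvfb a moment to come up
        try:
            driver = webdriver.Firefox()
            driver.get("http://localhost:8080/")  # assumed Jetty port
            print("page title:", driver.title)
            driver.quit()
        finally:
            xvfb.terminate()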


  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays).

    Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application).

    What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
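    For reference, the thread-per-connection arrangement described above looks roughly like the sketch below; the connection string and the SQL function name are placeholders. Because each thread holds its own connection, the heavy unnest() work runs in eight separate server backends, so any CPU ceiling is on the PostgreSQL side rather than in the Python driver code.

        # worker_threads.py - sketch of the thread-per-connection pattern
        # described above. The DSN and the SQL function name are placeholders;
        # each thread opens its own connection, so its statements run in a
        # separate backend process.
        import threading
        import psycopg2

        DSN = "dbname=mydb user=me host=localhost"   # assumed connection string
        N_THREADS = 8

        def worker(chunk_id):
            conn = psycopg2.connect(DSN)
            with conn.cursor() as cur:
                # assumed SQL function that processes one chunk of array rows
                cur.execute("SELECT process_chunk(%s)", (chunk_id,))
            conn.commit()
            conn.close()

        threads = [threading.Thread(target=worker, args=(i,))
                   for i in range(N_THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()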


  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:
    - the options being selected, including all tables
    - the folder where the script files will go
    - the folder being empty before scripting
    - the scripting process saying Success when scripting a table
    - the destination folder no longer empty, with a hundred or so script files
    - the script of some tables not being in the folder.

    And earlier, SSMS would not script some views. Is this a known thing, that the Generate Scripts task does not generate scripts?

    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.

    Update two: Some basic questions:

    1. What version of SQL Server?
       Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
       Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
       Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
       Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
       Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
    2. What O/S are you running on?
       Windows Server 2000
       Windows Server 2003
       Windows Server 2008
    3. How are you logging in to SQL Server?
       sa/password
       Trusted authentication
    4. Have you verified your account has full access to all objects?
       Yes, I have access to all objects.
    5. Can you use the objects that fail to script? (e.g.: select top(10) * from nonScriptingTable)
       Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update four: No errors found in the database using:

        DBCC CHECKDB
        go
        DBCC CHECKCONSTRAINTS
        go
        DBCC CHECKFILEGROUP
        go
        DBCC CHECKIDENT
        go
        DBCC CHECKCATALOG
        go
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.
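    To get a definitive list of what was skipped, it can help to compare the tables in the database with the .sql files SSMS actually produced. A rough sketch (the connection string, script folder, and the file-name matching rule are all assumptions):

        # find_missing_scripts.py - sketch; the connection string, script
        # folder, and file-name matching rule are assumptions. It simply looks
        # for each table name anywhere in a generated script file name.
        import os
        import pyodbc

        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;"
                    "Trusted_Connection=yes")              # assumed
        SCRIPT_DIR = r"C:\Scripts\mydb"                     # assumed

        with pyodbc.connect(CONN_STR) as conn:
            rows = conn.cursor().execute(
                "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                "WHERE TABLE_TYPE = 'BASE TABLE'").fetchall()

        files = [f.lower() for f in os.listdir(SCRIPT_DIR)
                 if f.lower().endswith(".sql")]
        for schema, table in rows:
            if not any(table.lower() in f for f in files):
                print(f"not scripted: {schema}.{table}")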


  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server IP.

    As a step in making this change, I set up port translation without actually changing the IP addresses. The NAT rules for the email server went from one static NAT rule with no port specified to multiple static NAT rules, each with a port or group matching the access rules for that server (SMTP, POP3, HTTP, HTTPS, and some other custom ports).

    The problem we are running into is confusing. Some outgoing mail through this server fails when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our addresses TO our internal accounts and to Gmail. However, email fails when sent FROM our client's email address TO our client's email or to their personal Comcast account. The only situation that worked for them was changing FROM to Comcast, and then messages went through fine to both Comcast and the client's accounts. After switching back to the regular static NAT rule, everything worked for them again.

    Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.
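    When only certain sender/recipient combinations fail, it helps to reproduce the SMTP conversation by hand and see exactly which command the remote side rejects. A minimal sketch (every address and the relay host below are placeholders):

        # smtp_probe.py - sketch; every host and address is a placeholder.
        # Walks the SMTP exchange step by step so you can see which command is
        # rejected (and with what reply code) for the failing combination.
        import smtplib

        RELAY = "mail.example.com"                  # assumed outbound server
        MAIL_FROM = "user@client-domain.example"    # a sender that fails
        RCPT_TO = "someone@comcast.example"         # a recipient that fails

        s = smtplib.SMTP(RELAY, 25, timeout=15)
        s.set_debuglevel(1)          # echo the whole conversation to stderr
        print("EHLO:", s.ehlo())
        print("MAIL FROM:", s.mail(MAIL_FROM))
        print("RCPT TO:", s.rcpt(RCPT_TO))
        s.quit()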


  • How do you install .NET 4 on a Server 2008 R2 machine through psremoting in PowerShell?

    - by Jake
    I need to write a script that installs .NET 4 remotely using PowerShell on a group of Server 2008 R2 machines. I based my script off of http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/.

        enter-pssession -computername localhost
        $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
        $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
        Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru

    After running the command I would get the following log errors (running the same lines locally installs .NET without error):

        Action: Downloading Item
        Failed to CreateJob : hr= 0x80200014
        Action: Performing actions on all Items
        Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
        Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
        Exe Log File: dd_SetupUtility.txt
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
        ServiceControl operation succeeded!
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
        ServiceControl operation succeeded!
        Action complete
        Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
        Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
        PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
        Action complete
        OnFailureBehavior for this item is to Rollback.
        Action: Performing actions on all Items
        Action complete
        Action complete
        Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
        WinHttpDetectAutoProxyConfigUrl failed with error: 12180
        Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
        Action complete
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
        Action complete
        Decompression completed with code: 16389
        Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
        Action complete
        Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).

    Is there some security setting or perhaps something else I've missed?


  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility + the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All 4 of my primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a system drive. I know that on Windows 8 the BitLocker setup tries to create the ~200 MB system partition if it's missing. However, with all 4 partitions filled I suspect it can't create the system drive = it can't find it = it throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc.

    Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I can be alright within the GPT world without MBR's 4-primary-partitions limit. So, how can I get rid of the MBR tables in the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working condition?

    Details: Hardware is the MacBook Pro Retina. The primary MBR partitions are consumed as follows:
    1. EFI partition
    2. HFS+ partition (encrypted, therefore "Apple_CoreStorage")
    3. HFS+ partition (Recovery partition, contains the unencrypted Mac bootloader)
    4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                    TYPE NAME              SIZE       IDENTIFIER
           0:   GUID_partition_scheme                   *251.0 GB  disk0
           1:                     EFI                   209.7 MB   disk0s1
           2:       Apple_CoreStorage                   160.0 GB   disk0s2
           3:              Apple_Boot Recovery HD       650.0 MB   disk0s3
           4:    Microsoft Basic Data Win8              90.1 GB    disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:
        Current GPT partition table:
        #      Start LBA      End LBA  Type
        1             40       409639  EFI System (FAT)
        2         409640    312909639  Unknown
        3      312909640    314179175  Mac OS X Boot
        4      314179584    490233855  Basic Data

        Current MBR partition table:
        # A    Start LBA      End LBA  Type
        1              1       409639  ee  EFI Protective
        2         409640    312909639  ac  Apple RAID
        3      312909640    314179175  ab  Mac OS X Boot
        4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage" / FileVault 2 partitions. Since the LBA addresses are alright, it's safe to ignore this message.
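    To confirm what the MBR side of the hybrid layout actually contains, a read-only sketch like this (run with sudo; it only reads the first sector of disk0) decodes the four MBR partition entries, so you can tell a hybrid MBR with several typed entries apart from the single 0xEE entry of a plain protective MBR:

        # read_mbr.py - read-only sketch (needs sudo); decodes the four MBR
        # partition entries on disk0. A pure-GPT disk shows one 0xEE protective
        # entry; the hybrid layout above shows ee/ac/ab/07 entries.
        import struct

        with open("/dev/disk0", "rb") as disk:   # raw device; adjust if needed
            mbr = disk.read(512)

        for i in range(4):
            entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
            boot_flag, ptype = entry[0], entry[4]
            lba_start, sectors = struct.unpack_from("<II", entry, 8)
            if ptype:
                print(f"entry {i + 1}: type 0x{ptype:02x}, "
                      f"boot={'yes' if boot_flag == 0x80 else 'no'}, "
                      f"start LBA {lba_start}, {sectors} sectors")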


  • I get a Segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with Segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)


  • How do I upgrade the BIOS to boot the motherboard when the CPU is not supported?

    - by Matt
    So I have a Tyan S8225 motherboard with a Valencia (Opteron 4200) CPU. Two of us have tried everything. We have swapped the whole motherboard, the power supply, the memory, even a different CPU (still a Valencia). We even tried it without any memory installed and with only a single CPU. There are no beep codes, as I suspect even those are controlled by the BIOS boot process. We put the whole thing on the bench with just the power supply, VGA, keyboard and network (for IPMI) connected. The IPMI is not even showing the BIOS starting, but the IPMI itself is working.

    After some hunting around, I discovered that the website claims the Valencia CPUs are not supported on older BIOS revisions. For a start, I don't know what the BIOS revision is and whether it's one of the older ones, but it's the only thing left. Could the BIOS be causing the board not to boot at all? If that's the case, is there any other way to update the BIOS without buying an old CPU only to put it back in a box just to update the BIOS? Yes, we even tried updating the BIOS through IPMI, but you can't do that either.


  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server but am having major difficulties, and I'm wondering if anyone has some insight or items they could recommend for me to look at to get this resolved. Installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; midway through the "Completing installation" phase the system then reboots unexpectedly.

    System configuration:
    - Cisco UCS C201 M2 (2RU rack-mounted server)
    - 16 GB RAM, 2x 73 GB 15K SAS, 4x 300 GB 10K SAS
    - Add-on cards: Intel quad-port GigE card (no fibre channel cards)
    - Storage: LSI MegaRAID SAS 9261-8i; onboard SATA is disabled (no SATA drives connected)
    - KVM: Belkin
    - No physical DVD-ROM :(

    What I have tried:
    - Ran memtest86+, no RAM faults
    - Disabled/enabled SATA support (BIOS)
    - Attempted install from a USB DVD-ROM, no effect
    - Attempted an unattended install scripted via the Cisco Configuration Manager DVD provided
    - Removed the Belkin KVM in case that was causing drama
    - Discovered that the Cisco website is "awesome" for searching for PDFs/drivers *cough*, reverted back to Google
    - Downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install
    - Checked the Windows ISO against checksums from the MS site
    - Checked the Windows ISO by using it for an install in a VM

    I'm running out of ways to troubleshoot this, as I am not sure how to enable any sort of 'verbose' mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again.

    Edit: The problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) card - will have to see if this card is faulty or if it's just drivers. Thanks for the help.


  • Determine the time difference between two linux servers

    - by Paul
    I am troubleshooting a network latency issue. It is probably a NIC or cabling issue, but while I was going through the process of figuring it out, I was looking at the timings of a ping packet leaving one network card and arriving at another server, both Linux. So I have tcpdump running on both, I issue a ping from one to the other and back again, and looking at the timing differences might shed light on where the latency is coming from. It is an academic exercise now, as I need to eliminate some more fundamental causes first, but I was curious as to how this could be achieved.

    Given that ntpd is installed and running on both servers, how can I confirm the current time discrepancy between the two servers, to whatever level of accuracy is possible - given that we are talking about latency on a local LAN, which is ideally a millisecond or so? NTP itself is accurate to a couple of ms under good conditions, and as both servers are in the same environment, they should (presumably) achieve a similar level of accuracy, and so should have a time discrepancy between them of only a few ms - but how can I check this?
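    One way to get a number using nothing beyond the standard library is to send each server's ntpd a plain SNTP query and compare the offsets it reports relative to the machine running the check. A rough sketch (the host names are placeholders; accuracy is bounded by the round-trip time, so expect millisecond-level resolution at best):

        # ntp_offset.py - rough sketch; queries the ntpd on each server (host
        # names are placeholders) with an SNTP request and prints each server's
        # clock offset relative to this machine. The difference between the two
        # offsets approximates the discrepancy between the servers themselves.
        import socket
        import struct
        import time

        NTP_EPOCH = 2208988800    # seconds between 1900-01-01 and 1970-01-01

        def offset(host):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(2)
            t1 = time.time()
            sock.sendto(b"\x1b" + 47 * b"\0", (host, 123))  # SNTP v3, client mode
            data, _ = sock.recvfrom(512)
            t4 = time.time()
            rx_s, rx_f, tx_s, tx_f = struct.unpack("!4I", data[32:48])
            t2 = rx_s - NTP_EPOCH + rx_f / 2**32   # server receive time
            t3 = tx_s - NTP_EPOCH + tx_f / 2**32   # server transmit time
            return ((t2 - t1) + (t3 - t4)) / 2

        for server in ("server-a.example", "server-b.example"):   # assumed
            print(f"{server}: offset {offset(server) * 1000:+.3f} ms vs local clock")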


  • schedule backup and restore of SSAS 2008 database

    - by Manjot
    Hi, I can back up and restore databases on Microsoft SQL Server Analysis Services 2008 using the GUI (as in Backup SSAS). I want to schedule a backup and restore it to another server every night, so what I did is:

    1. Scripted out the backup and restore process from the GUI.
    2. Created a new SQL Server Agent job in the database engine and added a "Run SSAS query" step.
    3. Copied the scripts to this step.

    But it fails. The scripts that the GUI produced look like:

        <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <DatabaseID>DB</DatabaseID>
          </Object>
          <File>C:\Backup\DB.abf</File>
          <AllowOverwrite>true</AllowOverwrite>
        </Backup>

        <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <File>\\server\C$\Backup\DB.abf</File>
          <DatabaseName>DB</DatabaseName>
          <AllowOverwrite>true</AllowOverwrite>
        </Restore>

    Any help please?


  • BIOS corrupted? How to proceed? Acer Travelmate 290

    - by dtlussier
    I have an older Acer TravelMate 290 (manufactured in 2002 or 2003) which I recently tried to upgrade to Ubuntu 9.10. After the upgrade process it appeared as if I had a problem with my X server configuration, as on the first reboot post-install I heard the Ubuntu startup sound but had a black screen. I thought I would reboot again and drop down into text mode to troubleshoot the X configuration problem.

    However, when I tried rebooting, something went wrong, and since then when I start up the machine I get absolutely nothing except the first hardware check (i.e. the HDD light flashes, the CD/DVD drive spins, etc.). Other than that the screen remains totally black and I have no HDD or processor activity at all. I have tried restarting it a number of times holding down all kinds of key combinations to try and coax it into the BIOS (if possible), with no luck. I have also tried putting in both a live Linux disc and a Windows install disk, without any luck. With a disk in the drive it will spin for a few seconds and then stop.

    All this has led me to suspect that the BIOS is somehow corrupt (not sure about the right terminology). I have tried putting a new BIOS image and installer program downloaded from Acer on a USB key to see if it will run when I start up the machine, but no luck. I'm not sure if this method of interacting with/updating/flashing the BIOS will work outside of Windows/DOS, as both OSs are mentioned kind of ambiguously in the documentation. I have also taken the laptop case apart and inspected the various cards and cannot find any obviously burned-out components.

    I'm not sure how to proceed at this point, in terms of components to try or how to load a new BIOS image onto the board. Any advice here would be great, especially from those with experience with this particular line of laptops. Thanks!


  • Linux Mint reset display resolution from console

    - by wullxz
    I have Linux Mint 13 Xfce in a VMware Workstation 8 VM. I changed the resolution from 800x600 to 1280x768, and now I get permanently logged out when I try to log in. I knew how to get back to my old resolution back in the xorg.conf days, but Linux Mint now uses xrandr, which won't display any outputs when running

        # xrandr

    because X is not running (of course not - I can't log in through the GUI). I know that there are configuration files in /etc/X11/Xsession.d/, because I configured a Debian-based thin client's resolution in a file called /etc/X11/Xsession.d/91configure_display, but that file doesn't exist in my Linux Mint VM.

    So, how do I reset my X screen resolution from the console?

    Edit: I forgot to tell you that I can't change the resolution from the console either:

        # xrandr -s 800x600
        Can't open display

    This message appears every time I use xrandr or xrandr -s *resolution*.

    Update: I tried what bWowk suggested:

        # export DISPLAY=:0.0
        # xrandr -s 800x600
        No protocol specified
        No protocol specified
        Can't open display :0.0

    So, that doesn't work either. Isn't there a configuration file that is executed every time X starts? X is running, by the way - ps aux | grep X shows one process, /usr/bin/X, running.


  • Windows 2008 Standard upgrade to Windows 2008 Enterprise failure

    - by Archit Baweja
    Side story: I was in the process of setting up a second Exchange 2010 server for DAG support when I realized that my box needed the Windows 2008 Enterprise edition. The box currently has:
    - Windows 2008 Standard
    - Windows updates including SP2
    - Exchange 2010 with the CAS, HT and Mailbox roles
    - the Domain Services role
    - the File Services role

    When I try to upgrade to Windows 2008 Enterprise, I initially got a "your current version of Windows is more recent than the installation media" error, something to that effect. My first guess was that it might be SP2-related, so I uninstalled SP2, restarted and tried again. This time it gave me an error to the effect of "Windows could not configure one or more Windows components. Please restart and try the update again." This was at the last stage of the Windows 2008 Enterprise install, when it says "Completing installation". So I removed the Domain Services role (including demoting it as a DC). However, I get the same error again.

    Has anyone seen something like this before, and do you have any suggestions? Also, is there a log file the Windows upgrade program spits out that I can consult to see which component exactly is interfering?

    Update 1: Based on some googling I finally found the setup log file, and it seems that Windows Setup had an issue determining whether the .NET 3.0 "feature" was being installed or uninstalled. So, based on a Win7/Vista TechNet article, I'm going to retry the upgrade after removing the .NET 3.0 feature.

