Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.

Page 699/1119 | < Previous Page | 695 696 697 698 699 700 701 702 703 704 705 706  | Next Page >

  • HAProxy per-domain redirection

    - by SecondThought
    I'm trying to route requests on my load balancer to a separate backend by domain name, using an acl with hdr_dom. The routing works fine for the first request - 'GET /' (the destination server is a WordPress site) - but when the client asks for the assets ('GET /blablabla/style.css' for example), haproxy no longer sends them to the right backend, but to the default one. In the haproxy log I can see the correct host that the request is for (the one I defined in hdr_dom), but my guess is that since the GET request line itself is relative (it doesn't contain the domain, only the path from /blablabla onward), haproxy doesn't match it with hdr_dom. I'm just guessing here.. Please help...
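
    For reference, a minimal haproxy.cfg sketch of the kind of per-domain routing described here (the domain and backend names are invented for illustration):

        frontend http-in
            bind *:80
            # example domain/backend names; hdr_dom matches the Host header,
            # which is sent on every request, including the relative asset requests
            acl is_blog hdr_dom(host) -i blog.example.com
            use_backend blog_servers if is_blog
            default_backend default_servers

        backend blog_servers
            server wp1 10.0.0.11:80 check

        backend default_servers
            server web1 10.0.0.21:80 check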

    Read the article

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time in the 3 AM hour, my Xorg process will climb to 100% CPU and GPU load will also hit 100%. The process also becomes unkillable. I cannot sudo kill -9 it or get back control with sudo service lightdm restart. I also cannot switch to a tty screen with ctrl + alt + f1. To reboot I have to log in with ssh, but this is not ideal because if I reboot while it is doing this, my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, know why this is happening? Thanks. NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and asks the user to compile the required modules so they can be loaded. This happens from time to time after a regular update of your system. Usually, the VMware Kernel Module Updater dialog pops up, asks for root authentication, and completes the compilation. VMware Player or Workstation checks whether modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as super user (root), the dialog stalls like so: VMware Kernel Module Updater stalls while stopping the services. Prior to version 5.x of VMware Player or version 7.x of VMware Workstation you would run a command like

        $ sudo vmware-config.pl

    to resolve the module version conflict, but this doesn't work anymore. Solution: instead, you have to execute the following line in a terminal or console window:

        $ sudo vmware-modconfig --console --install-all

    Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware has already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411). Update: As of today I had the first kernel upgrade to version 3.8.0-22 in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • How to fix /etc/ folder on Mac OS X

    - by justinhj
    I was following a tutorial which had this command to create a launchd.conf file in /etc/:

        sudo echo "some command" > /etc/launchd.conf

    But it wouldn't work; I got permission denied after entering my admin password. So it seemed like the permissions for the link were wrong, so I did 'sudo chmod 755 /etc/'. But now I can't load a terminal - I get the error "The administrator has set your shell to an illegal value" - and if I try to sudo a command now I get:

        sudo: can't open /private/etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        Process tramp/sudo root@localhost exited abnormally with code 1

    This is what the link /etc looks like now - what should it look like, and how do I restore it?

        lrwxr-xr-x   1 root wheel   11 Jul 21  2011 etc -> private/etc
        /private/etc ...
        drw-r--r-- 111 root wheel 3774 Mar 26 02:25 etc

    edit: I'm using Mac OS X 10.7.3
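
    As an aside on that first command: sudo applies to echo, but the output redirection is performed by the (non-root) shell, which is one common reason for the permission denied. A typical way to write to a root-owned file - shown here only as a generic sketch, not as the tutorial's own instructions - is to run the redirection inside a root shell or use tee:

        # run the whole command, including the redirection, as root
        sudo sh -c 'echo "some command" > /etc/launchd.conf'

        # or let tee perform the privileged write
        echo "some command" | sudo tee /etc/launchd.conf > /dev/null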

    Read the article

  • Is Innovation Dead?

    - by wulfers
    My question is: has innovation died? For large businesses that do not have vibrant and fearless leadership (see Apple under Steve Jobs), I think it has. If you look at the organizational charts of many of the large corporate megaliths, you will see a plethora of middle managers who are so risk averse that innovation (any change involves risk) is choked off, since there are no innovation champions in the middle layers. And innovation driven top-down can only happen when you have a visionary in the top ranks, and that is also very rare. So where is actual innovation happening? At the bottom layer - the people who live in the trenches… the people who live for a challenge. So how can big business leverage this innovation layer? Remove the middle management layer. Provide an innovation champion who has an R&D budget and is tasked with working with the bottom layer of a company - the engineers, developers and business analysts who live on the edge (where the corporate tires meet the road). Here are two innovation failures I will tell you about, both from a company so risk averse it is starting to fail in its primary business ventures: This company initiated an innovation process several years ago. The process was driven companywide, with team managers being the central points of collection for innovative ideas. These managers were given no budget to do anything with these ideas. There was no process or incentive for these managers to drive it within their teams. This lasted close to a year and the innovation program slowly slipped into oblivion… A second example: this same company failed in an attempt to market a consumer product in a line where there was already a major market leader. This product was under development for several years and needed to provide some major device differentiation from the current market leader. This same company had a large Lead Technologist community made up of real innovators in all areas of technology. Did this same company leverage the skills and experience of this internal community? NO!!! So to wrap this up: if large companies really want to survive, then they need to start acting like a small company. Support those innovators and risk takers! Reward them by implementing their innovative ideas. Champion (from the top down) innovation (found at the bottom) in your companies. Remember, if you stand still you are really falling behind. Do it now! Take a risk!

    Read the article

  • Sample domain model for online store

    - by Carel
    We are a group of 4 software development students currently studying at the Cape Peninsula University of Technology. Currently, we are tasked with developing a web application that functions as an online store. We decided to do the back-end in Java while making use of Google Guice for persistence (which is mostly irrelevant to my question). The general idea so far is to use PHP to create the website. We decided that we would like to try, after handing in the project, to register a business and actually implement the website. The problem we have been experiencing is with the domain model. These are mostly small issues; however, they are starting to impact the schedule of our project. Since we are all young IT students, we have virtually no experience in the business world. As such, we spend quite a significant amount of time planning the domain model in the first place. Now, some of the issues we're picking up concern, say, the reference between the Customer entity and the Order entity. Currently, we don't have the customer id in the Order entity; we have a list of Order entities in the Customer entity. Lately, I have wondered whether the persistence mechanism will put the customer id physically in the order table, even if it's not in the entity. So I started wondering: if you load a Customer object, will it search the entire order table for orders with that customer's id? Now, say you have 10,000 customers and 500,000 orders - won't this take an extremely long time? There are also some business processes that I'm not completely clear on. Finally, my question is: does anyone know of a sample domain model out there that is similar to what we're trying to achieve and that is safe to look at as a reference? I don't want to be accused of stealing anybody's intellectual property, especially since we might implement this as a business.
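
    To illustrate the mapping question, here is a hedged sketch (entity, table, and column names are invented, and it assumes a JPA provider behind Guice Persist rather than this project's actual mapping). In a bidirectional Customer/Order mapping, the foreign key normally lives in the order table even though the Customer entity only exposes a collection, and that collection can be fetched lazily:

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        // illustrative entities only; each would normally live in its own source file
        @Entity
        public class Customer {
            @Id @GeneratedValue
            private Long id;

            // No customer_id column here: the foreign key is in the ORDERS table.
            // LAZY means the orders are only queried when the collection is accessed.
            @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
            private List<Order> orders = new ArrayList<Order>();
        }

        @Entity
        @Table(name = "ORDERS")   // "ORDER" is a reserved word in SQL
        public class Order {
            @Id @GeneratedValue
            private Long id;

            // Owning side of the relationship: maps to the customer_id column.
            @ManyToOne
            @JoinColumn(name = "customer_id")
            private Customer customer;
        }

    With an index on ORDERS.customer_id, loading one customer's orders is a single indexed query rather than a scan of the whole table, so 500,000 rows is not in itself a performance problem.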

    Read the article

  • Windows 7 can ping but can't see device on other subnet

    - by user192702
    I have 2 Windows 7 machines on 2 different subnets, but 1 of them is unable to reach a NAS. The topology is as follows. Any idea why this is the case? Are there some Windows settings I need to apply?

    Subnet 1
    - PC 1
    - NAS

    Subnet 2
    - PC 2

    PC 1 is able to do the following:
    - Load the admin page in the browser.
    - Show the NAS under Windows Explorer - Network.
    - Access the NAS when \\ is typed in Windows Explorer.

    PC 2 is unable to do any of the above 3. It can however ping the NAS and get a response.

    Read the article

  • What's static.ak.fbcdn.net that appears on the status bar of my browser every time Facebook is loading?

    - by Maverick
    I find the message "waiting for static.ak.fbcdn.net..." on the status bar of my browser every time I load Facebook, and many a time even while loading other websites. I searched on the net and found out that static.ak.fbcdn.net stands for the static Akamai Facebook content delivery network. I reckon that static.ak.fbcdn.net is the server URL from which Facebook delivers content to our browsers. Am I right? Can anyone elaborate? Also, why does the above-mentioned message appear while loading other websites too?

    Read the article

  • linux migration/N high cpu consumption

    - by Alexander
    on my linux appliance based on the 3.0.0-14 kernel I got:

    RPN:/tmp# ps axuf | grep migration
    root 6 92.9 0.0 0 0 ? S Apr23 2788:33 \_ [migration/0]
    root 7 99.7 0.0 0 0 ? S Apr23 2993:20 \_ [migration/1]

    My top output is:

    RPN:/tmp# top -b -n1
    top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
    Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
    Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
    Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
    Swap: 0k total, 0k used, 0k free, 183168k cached

    My question: why do the "migration/N" processes take so much CPU?

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean their mailboxes prior to the move, to reduce the transfer load as well as to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for those users to check their mail store size. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only thing I've come up with as a workaround is a report that IT generates and mail-merges out to users daily with their current mailbox size. Compared to a way for them to check their own mailbox size, however, this is cumbersome and time consuming.
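
    For the report-style workaround, one possible approach is to pull per-mailbox sizes from the Exchange 2003 WMI provider and feed the CSV into the mail merge. This is only a sketch under the assumption that the root\MicrosoftExchangeV2 namespace and the Exchange_Mailbox class are available on the server; EXCH01 stands in for the actual server name and the property names are worth verifying:

        # EXCH01 is a placeholder for the Exchange 2003 server; Size is reported in KB
        Get-WmiObject -ComputerName EXCH01 -Namespace "root\MicrosoftExchangeV2" -Class Exchange_Mailbox |
            Select-Object MailboxDisplayName, Size, TotalItems |
            Export-Csv mailbox-sizes.csv -NoTypeInformation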

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices and only keeping the domain controller(s) at the primary site? (Assuming upgrades are coming regardless.) Under what circumstances would you choose one setup over the other?

    Read the article

  • Behaviour of nginx as proxy

    - by HD
    I'm testing nginx with different configurations to replace an architecture built on squid + apache. I know that I can use nginx to manage static requests and do load balancing, but I'm interested in one particular solution that I don't understand clearly: I'm using 2 nginx servers (balanced) with the proxy_pass setting to pass all requests to an apache server. When a client makes a request to the site, one of the nginx servers processes it and sends it to the apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests are passing through apache and I don't see any benefit at all. What happens when 100 simultaneous connections pass through nginx? Will all 100 connections be passed on to the apache server, or is there some kind of internal behaviour that reduces the impact on apache?
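
    A minimal nginx sketch of the setup being described (addresses, ports and paths are invented for illustration). The usual argument for this layout is that nginx holds the slow client connections and buffers responses, serves static files itself, and hands apache only the dynamic requests:

        upstream apache_backend {
            server 10.0.0.10:8080;   # example backend address
        }

        server {
            listen 80;

            # static files served directly by nginx, never reaching apache
            location /static/ {
                root /var/www;
            }

            # everything else is proxied (and buffered) to apache
            location / {
                proxy_pass http://apache_backend;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }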

    Read the article

  • How can I run Ghost from a bootable USB key drive?

    - by Joe Philllips
    I have a laptop that does not have a CD-ROM or floppy drive. It is able to boot from USB though. I have a disk image (Ghost) of the disk that I need to restore onto the laptop. I can't find a way to actually run the Ghost utility from a USB key though. I believe ghost.exe should run from within DOS just fine, but I can't seem to create a bootable USB key with DOS on it that allows me to run an EXE. Edit: I managed to find a Ghost utility that I could load from a bootable USB drive. Unfortunately, when I plug in my NTFS external drive (USB), it is not detected.

    Read the article

  • ERP/CRM systems: desktop based or web based? [closed]

    - by Parhs
    I have seen 2-3 ERPs in action. I am wondering what is better: a desktop-based application, or a web-based one displayed in a browser. My first experience was with a web-based ERP when I was 14 years old. It was web based and terribly slow... For the most simple tasks you had to do lots of clicks... no keyboard support... pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The computer that worked until today, and still has no problem, was from 1993. The user interface of course was text based. The speed at which those guys placed orders was amazing! Just typing the name of the customer, then 5-10 keys to add a product to an order... Compared to this, the ERP's page for placing orders Link (click sales orders) seems terribly slow for adding a product... No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface... Having to use both mouse and keyboard for this task is BAD and sadistic... So how the heck can these people ever use a system like that??? So in the long run a desktop application seems the only way... Of course browsers support shortcuts, but the way to override the defaults a browser uses isn't cross compatible... That is a huge problem. Finally, if we MUST use (or are forced to use) the cloud in the near future, what about keyboard shortcuts?? I feel confused... I have seen converters of desktop applications to browser applications, but they are SLOW as hell... The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage and free memory; for some I also need app-specific metrics, like redis (some vars from the output of the INFO command) and nginx (e.g. requests/sec, average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes. These parameters should be tracked as a bunch of line charts on a dashboard. I checked out graphite and it looks suitable (although the UX and aesthetics look like they need some love). But setting up and maintaining graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also ok.

    Read the article

  • Fusion CRM Release 7 RCDs and TOIs Now Available!

    - by Richard Lefebvre
    Fusion CRM Release 7 Release Content Documents (RCD) and Transfer of Information (TOI) presentations are now available. In addition, you can find 245 new or changed product features for Release 7 on Oracle Product Features. All the new RCDs and TOIs can be found on the Fusion Learning Center:

    Customer Relationship Management
    - TOIs: Customer Center, Define Segmentation Strategy, Enterprise Contracts, Oracle Social Network, Sales, and Territory Management
    - Business Process Model (BPM) RCDs: Customer Service, Marketing, Order Fulfillment, and Sales

    Financials
    - BPM RCDs: Asset Lifecycle Management, Cash and Treasury Management, and Financial Control and Reporting

    Human Capital Management
    - TOIs: Workforce Development, Compensation, Benefits, Worker Performance, Workforce Profiles, Enterprise Structures, Talent Review, Manage Transaction and Batch Processing, Delete HCM Storage Data, and Load Batch Data
    - BPM RCDs: Compensation Management, Enterprise Information Management, Workforce Deployment, and Workforce Development

    Procurement
    - TOI: Requisitions
    - BPM RCD: Procurement

    Project Portfolio Management
    - TOIs: Project Resources, Evaluate and Assign Resources, Maintain Resource Assignments, Manage Resource Demand, Manage Resource Supply, Manage Resource Utilization and Analytics, Project Management, Set Up Project Management
    - BPM RCD: Project Management

    Supply Chain Management
    - TOIs: Manage New Product Definition and Approval, Manage Product Change Orders, Product Hub, Define Item Class
    - BPM RCDs: Materials Management and Logistics, Product Management and Supply Chain Planning

    Partners and customers can access the content from the following locations:

    Partner access
    - BPM RCDs and TOIs: Oracle Partner Network Fusion Learning Center
    - New Feature RCDs: Oracle Product Features

    Customer access
    - TOIs: My Oracle Support (Note:1528594.1)
    - BPM RCDs: My Oracle Support (Note:1559828.1)
    - New Feature RCDs: Oracle Product Features

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

    Summary of Lab Exercises

    This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.

    Read it now
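
    For a flavour of the final exercises, loading data into HDFS and running a sample MapReduce job typically looks something like the sketch below (a generic Hadoop 1.x example, not taken from the lab itself; paths and the log file name are invented):

        # copy a local file into HDFS
        hadoop fs -mkdir /user/hadoop/input
        hadoop fs -put /tmp/access.log /user/hadoop/input/

        # run the bundled wordcount example against it
        hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount \
            /user/hadoop/input /user/hadoop/output

        # inspect the result
        hadoop fs -cat /user/hadoop/output/part-r-00000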

    Read the article

  • A fix for the design time error in MVVM Light V4.1

    - by Laurent Bugnion
    For those of you who installed V4.1 of MVVM Light and created a project for Windows Phone 8, you will have noticed an error showing up in the design surface (either in the Visual Studio designer, or in Expression Blend). The error says: "Could not load type 'System.ComponentModel.INotifyPropertyChanging' from assembly 'mscorlib.extensions'" with additional information about version numbers. The error is caused by an incompatibility between versions of System.Windows.Interactivity. Because this assembly is strongly named, any version incompatibility causes the kind of error shown here (for an interesting discussion on the strong naming issue, see this thread on Codeplex). I managed to resolve the issue for Windows Phone 8 and will publish a cleaned up installer next week. In the meantime, in order to allow you to continue development, please follow these steps:

    1. Download the new DLLs zip package (MVVMLight_V4_1_25_WP8).
    2. Right click on the Zip file and select Properties from the context menu.
    3. Press the "Unblock" button (if available) and then OK.
    4. Right click again on the zip package and select "Extract all…".
    5. Select a known location for the new DLLs.
    6. Open the MVVM Light project with the design time error in Visual Studio 2012.
    7. Open the References folder in the Solution Explorer.
    8. Select the following DLLs: GalaSoft.MvvmLight.dll, GalaSoft.MvvmLight.Extras.dll, Microsoft.Practices.ServiceLocation.dll and System.Windows.Interactivity.dll.
    9. Press "delete" and confirm to remove the DLLs from your project.
    10. Right click on References and select Add Reference from the context menu.
    11. Browse to the folder with the new DLLs.
    12. Select the four new DLLs and press OK.
    13. Rebuild your application, and open it again in Blend or in the Visual Studio designer. The error should be gone now.

    In the next few days, as time allows, I will publish a new MSI containing a fixed version of the DLLs as well as a few other improvements. This quick fix should however allow you to continue working on your Windows Phone 8 projects in design mode too.

    Laurent Bugnion (GalaSoft)

    Read the article

  • How to help FGLRX detect a device

    - by user113416
    I have an HD 4850 card and Ubuntu 12.10, and I installed the legacy drivers using the makson96 PPA. The issue is that FGLRX cannot detect my device and loads the vesa bios instead. I had the same problem on Ubuntu 11.10 and 12.04. I want to manually help fglrx find a matching device to load, as it should do. It is interesting: why does fglrx search for a device on the PCI:0@1:0:1 bus? In xorg.conf a different bus is indicated:

    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection

    fglrxinfo:

    display: :0.0 screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Radeon HD 4800 Series
    OpenGL version string: 3.3.11653 Compatibility Profile Context

    Here is a part of my Xorg log:

    [ 3.846] (II) VESA: driver for VESA chipsets: vesa
    [ 3.846] (II) FBDEV: driver for framebuffer: fbdev
    [ 3.846] (++) using VT number 7
    [ 3.846] (WW) Falling back to old probe method for fglrx
    [ 3.883] (II) Loading PCS database from /etc/ati/amdpcsdb
    [ 3.883] (--) Assigning device section with no busID to primary device
    [ 3.883] (--) Chipset Supported AMD Graphics Processor (0x9442) found
    [ 3.884] (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found
    [ 3.884] (II) AMD Video driver is running on a device belonging to a group targeted for this release
    [ 3.884] (II) AMD Video driver is signed
    [ 3.884] (II) fglrx(0): pEnt->device->identifier=0xb7791d8f
    [ 3.884] (WW) Falling back to old probe method for vesa
    [ 3.884] (WW) Falling back to old probe method for fbdev

    Thanks in advance.

    Read the article

  • Make Bitmap Font Always Display in Center Without Coding It Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without coding it manually each time, especially when the value changes after an update? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text still displays at the same fixed X and Y coordinates. Is there really a way to keep the text centered when a player beats the world record and the number of digits changes? Here's my code inside my SpriteBatch:

    font.setScale(1.5f);
    font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10);
    font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8);
    batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purpose only.

    Where 9999999 is a new record score, for example. Here's the image shown as an example. I added a red grid so that I could check whether the score display stays centered no matter how many digits it has. However, the position is fixed, so I have to figure out how to center it automatically, regardless of the number of digits, when updating to a new high score. I have used the LibGDX preferences successfully to save and load records for the high score.
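
    One way to do this, sketched under the assumption that this is the pre-1.0 LibGDX BitmapFont API (which the setScale call suggests), is to measure the rendered text first and derive the X coordinate from the virtual screen width:

        String score = "" + 9999999;
        // measure the string at the current font scale, then center it horizontally
        BitmapFont.TextBounds bounds = font.getBounds(score);
        float x = (900 - bounds.width) / 2f;   // 900 = the virtual width used above
        font.draw(batch, score, x, (1280/16)*8);

    On newer LibGDX versions the same measurement is done with a GlyphLayout instead of getBounds.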

    Read the article

  • Google Analytics: How long does it take users to trigger an event

    - by Stephen Ostermiller
    I implemented Google Analytics event tracking on my currency conversion website. The typical user flow is:

    1. The user lands on a page about two currencies.
    2. The user enters an amount to be converted.
    3. The site shows the user the value in the other currency.
    4. The JavaScript sends Google Analytics a "converted" event when the currency conversion is done.

    Because most of the sessions on my site are single-page, the event tracking is very important to me for knowing whether users find my page useful. I'm looking for a way to figure out how long it typically takes users to enter a value in the form. I expect this data would form a bell curve centered around some specific amount of time after page load. If I can't get a graph, I could make do with a median value. I would like to be able to use this as a core metric in usability testing. Is there a way to get this information out of Google Analytics?
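
    One way to capture this, sketched on the assumption that the page uses the Universal Analytics (analytics.js) ga() snippet, is to send a user-timing hit alongside the existing event; the User Timings reports then show averages and a distribution of the recorded values. The function and label names below are invented:

        // note when the page finished loading
        var pageLoadedAt = Date.now();

        function onConverted() {
          // the existing "converted" event described above
          ga('send', 'event', 'conversion', 'converted');

          // additionally report how long the user took, in milliseconds
          var elapsedMs = Date.now() - pageLoadedAt;
          ga('send', 'timing', 'conversion', 'time-to-convert', elapsedMs);
        }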

    Read the article

  • Ubuntu 12.04 updates failing recently - please help

    - by user74152
    I upgraded Ubuntu 11.10 to 12.04 LTS immediately after its release (April 2012). Since then updates (new kernels and others) succeeded regularly, but recently updates have suddenly started failing consistently. What causes the problem and how can it be solved? Terminal output from the last update attempt:

    ariel@ariel-MS-7592:~$ sudo apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    3 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up linux-image-3.2.0-26-generic (3.2.0-26.41) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-26-generic
    run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
    run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
    run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
    /usr/sbin/grub-mkconfig: 11: /etc/default/grub: splash”: not found
    run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-26-generic.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-26-generic (--configure):
    subprocess installed post-installation script returned error exit status 2
    No apport report written because MaxReports is reached already
    dpkg: dependency problems prevent configuration of linux-image-generic:
    linux-image-generic depends on linux-image-3.2.0-26-generic; however:
    Package linux-image-3.2.0-26-generic is not configured yet.
    dpkg: error processing linux-image-generic (--configure):
    dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of linux-generic:
    linux-generic depends on linux-image-generic (= 3.2.0.26.28); however:
    Package linux-image-generic is not configured yet.
    dpkg: error processing linux-generic (--configure):
    dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
    linux-image-3.2.0-26-generic
    linux-image-generic
    linux-generic
    E: Sub-process /usr/bin/dpkg returned an error code (1)
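
    The failing step in that log is grub-mkconfig choking on line 11 of /etc/default/grub (note the stray curly quote after splash). A plausible way to confirm this and then let the half-installed kernel packages finish configuring - shown as a generic sketch, not as the accepted fix from this thread - would be:

        # show the line that grub-mkconfig complains about
        sed -n '11p' /etc/default/grub

        # after repairing the quoting on that line
        # (typically something like: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"),
        # re-run the interrupted package configuration
        sudo dpkg --configure -a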

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3 GB persistence) I suddenly get this error:

    (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
    Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know how to go on from there; I'm totally new to Linux.) The reason I say my case is different is that my Ubuntu worked fine for over a week, even pretty fast, and now this problem has appeared. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem as above after a while and had to re-create a live USB Linux several times. Installing on the hard drive is not an option because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu. As I said, I used Ubuntu without problems for more than a week (before that, for several weeks on USB sticks), but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD card, and while booting I pressed F6 and selected the first option, "no acpi" or something like that... I had used it before and noticed how it affected the time Linux took to boot. This time it caused this error. Now I get the error even when I boot normally with the defaults. At the moment I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there, and filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes and downloaded programs to my SD-card persistent Ubuntu and would hope not to lose them, since downloading is expensive for me and since the problem seems to recur...) Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post, I just didn't know what info is relevant and what isn't! Kon

    Read the article

  • Rebuild an existing Rackspace server from scratch?

    - by Mojo
    In the process of working out kinks in a server build, is it possible to re-bootstrap a server from scratch, image and all? (Same flavor, say.) By that I mean without recreating the server, keeping its IP address if nothing else. I can't find a way to do this. It would have some advantages, I should think: It wouldn't decrement the 'server create' quota. The existing server would keep its IP address. One machine of a cluster could be rebuilt to a new image without having to change the IP address. (Maybe load balancers make IP addresses a moot point, but it still seems like a worthwhile task.)

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits a strange behavior. I am running a constant (not too high) load on my application (20 concurrent users performing a light activity). The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well. And yet, after I leave my test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for very long (all of the newest threads were created while I was writing this question), and I've seen far more threads being created on longer runs. Any idea why these threads are being created?
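
    A common first step here (a generic sketch, not taken from the original post) is to capture a few thread dumps of the WebLogic JVM a minute or so apart and compare what the accumulating ExecuteThreads are doing; the self-tuning pool typically adds threads when existing ones appear blocked, so the dumps usually show where the older threads are stuck:

        # identify the WebLogic server JVM, then dump its threads (repeat a few times)
        jps -l
        jstack -l <pid> > threads-$(date +%H%M%S).txt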

    Read the article
