Search Results

Search found 13120 results on 525 pages for 'business layer'.

Page 455/525 | < Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >

  • Testing Hibernate DAO, without building the universe around it.

    - by Varun Mehta
    We have an application built using Spring/Hibernate/MySQL, and now we want to test the DAO layer, but here are a few shortcomings we face. Consider the use case of multiple objects connected to one another, e.g. Book has Pages. The Page object cannot exist without the Book, as book_id is a mandatory FK in Page. So for testing a Page I have to create a Book. This simple use case is easy to manage, but once you start building a Library, you cannot test anything until you have created the whole universe surrounding the Book and Page! So to test Page: create Library, create Section, create Genre, create Author, create Book, create Page, and now test Page. Is there an easy way to bypass this "universe creation" and just test the Page object in isolation? I also want to be able to test HQL related to Page, e.g. SELECT new com.test.BookPage(book.id, page.name) FROM Book book, Page page. JUnit is supposed to run in isolation, so I have to write the whole test case to create the Page. Any tips will be useful.
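
    One way to keep each test short without changing the schema is to hide the "universe creation" behind a small test-data builder (an "object mother") that every test reuses. The sketch below shows the shape of the idea in plain JavaScript rather than Java, with all names made up, so only the intent carries over to the Hibernate entities:

      // Hypothetical test-data builder: the one place that knows how to create the
      // minimal persisted graph (Library -> Section -> Genre -> Author -> Book)
      // that must exist before a Page can be saved.
      function makePersistedBook(repo, overrides = {}) {
          const library = repo.save({ type: "Library", name: "Test Library" });
          const section = repo.save({ type: "Section", libraryId: library.id });
          const genre   = repo.save({ type: "Genre",   name: "Fiction" });
          const author  = repo.save({ type: "Author",  name: "Test Author" });
          return repo.save({
              type: "Book",
              sectionId: section.id,
              genreId: genre.id,
              authorId: author.id,
              ...overrides,
          });
      }

      // A Page test then only states what it is actually about.
      function testPageName(repo) {
          const book = makePersistedBook(repo);
          const page = repo.save({ type: "Page", bookId: book.id, name: "Chapter 1" });
          console.assert(page.name === "Chapter 1", "page keeps its name");
      }

    On the Java side the same effect is usually reached by pointing the Spring test context at an in-memory database and letting a builder like this run inside the test transaction, which also lets the Page HQL be exercised against the real mappings.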

    Read the article

  • Should Service Depend on Many Repositories, or Break Them Up?

    - by Josh Pollard
    I'm using a repository pattern for my data access, so I basically have a repository per table/class. My UI currently uses service classes to actually get things done, and these service classes wrap, and therefore depend on, repositories. In many cases my services are only dependent upon one or two repositories, so things aren't too crazy. Unfortunately, one of my forms in the UI expects the user to enter data that will span five different tables. For this form I made a single service class that depends upon five repositories. The methods within the service for saving and loading the data then call the appropriate methods on all of the corresponding repositories. As you can imagine, the save and load methods in this service are really big. Also, unit testing this service is getting really difficult because I have to set up so many fake repositories. Would it have been a better choice to break this single service apart into a few smaller services? It would put more code at the UI layer, but would make the services smaller and more testable.
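
    A common middle ground is to keep one thin coordinating service whose only job is orchestration, and push the per-table work into smaller services that are easy to test on their own. A rough sketch of that split in plain JavaScript (all names hypothetical, transaction handling waved away):

      // Small, focused services: each wraps one repository and is easy to unit test.
      class CustomerService {
          constructor(customerRepository) { this.repo = customerRepository; }
          save(customer) { return this.repo.save(customer); }
      }

      class OrderService {
          constructor(orderRepository) { this.repo = orderRepository; }
          save(order) { return this.repo.save(order); }
      }

      // Thin coordinator for the multi-table form: it depends on a couple of small
      // services instead of five repositories, and contains only orchestration logic.
      class OrderEntryService {
          constructor(customerService, orderService /*, ...other small services */) {
              this.customers = customerService;
              this.orders = orderService;
          }
          saveForm(formData) {
              const customer = this.customers.save(formData.customer);
              const order = this.orders.save({ ...formData.order, customerId: customer.id });
              return { customer, order };
          }
      }

    Testing the coordinator then needs only two or three small fakes, and each focused service can be tested against a single fake repository, without pushing any extra code up into the UI layer.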

    Read the article

  • Improving the efficiency of multiple concurrent Core Animation animations

    - by Alex
    I have a view in my app that is very similar to the month view in the built-in Calendar app. There's a subview that holds the individual cells (a custom UIView subclass that draws text into its layer), and when the user navigates to the next "month", I create the new cells and slide the view to show them. When the animation stops, I remove the old, hidden cells and set things up so it's ready to go for the next animation. This all works nicely. However, I'd like to animate the cells' text color, as in the Calendar app, so that the outgoing ones transition to a lighter color and the incoming ones transition to a darker color. The problem is that I can have as many as 70 cells, so doing individual animations is very slow -- between 5-10 fps on my iPhone 3GS. I'm trying to find a less computationally intense way of doing this. My reading of the Shark results is that the majority of the time is spent redrawing the text for each cell on each frame. This makes sense, since text rendering is hardly the cheapest operation. I've considered creating a second set of cells -- one set holding the "outgoing" state and one holding the "incoming" state -- and using a single opacity animation to gradually reveal the updated cells while both are sliding. I'm concerned that instead of having 70 cells, I'll have 140, which seems like a lot of views. So, is that too many views, or would there be a better way of doing this?

    Read the article

  • More efficient way of writing this javascript

    - by nblackburn
    I am creating a contact form for my website and am using JavaScript as the first layer of validation before submitting; the input is then checked again via PHP. I am relatively new to JavaScript, but here is my script... $("#send").click(function() { var fullname = $("input#fullname").val(); var email = $("input#email").val(); var subject = $("input#subject").val(); var message = $("textarea#message").val(); if (fullname == ""){ $("input#fullname").css("background","#d02624"); $("input#fullname").css("color","#121212"); }else{ $("input#fullname").css("background","#121212"); $("input#fullname").css("color","#5c5c5c"); } if (email == ""){ $("input#email").css("background","#d02624"); $("input#email").css("color","#121212"); }else{ $("input#email").css("background","#121212"); $("input#email").css("color","#5c5c5c"); } if (subject == ""){ $("input#subject").css("background","#d02624"); $("input#subject").css("color","#121212"); }else{ $("input#subject").css("background","#121212"); $("input#subject").css("color","#5c5c5c"); } if (message == ""){ $("textarea#message").css("background","#d02624"); $("textarea#message").css("color","#121212"); }else{ $("textarea#message").css("background","#121212"); $("textarea#message").css("color","#5c5c5c"); } if (name && email && subject && message != ""){ alert("YAY"); } }); How can I write this more efficiently and make the alert show if all the fields are filled out? Thanks.
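
    Two things stand out in the final check: "name" is never defined (the variable is "fullname"), and the expression fullname && email && subject && message != "" only compares message against the empty string; the other three are merely tested for truthiness. One possible refactor (a sketch, not the only option) drives everything from a list of selectors so the styling lives in one place and validity is tracked in a single flag; the selectors and colours below are the ones from the question:

      $("#send").click(function () {
          var fields = ["input#fullname", "input#email", "input#subject", "textarea#message"];
          var allFilled = true;

          $.each(fields, function (i, selector) {
              var $field = $(selector);
              if ($.trim($field.val()) === "") {
                  // Empty: highlight the field and remember the form is not valid yet.
                  $field.css({ background: "#d02624", color: "#121212" });
                  allFilled = false;
              } else {
                  // Filled: restore the normal colours.
                  $field.css({ background: "#121212", color: "#5c5c5c" });
              }
          });

          if (allFilled) {
              alert("YAY");
          }
      });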

    Read the article

  • Dependency Injection: How to pass DB around?

    - by Stephane
    Edit: This is a conceptual question first and foremost. I can make applications work without knowing this, but I'm trying to learn the concept. I've seen lots of videos with related classes and that makes sense, but when it comes to classes wrapping around other classes, I can't seem to grasp where things should be instantiated and passed around. =-=-=-=-=-=-= Question: Let's say I have a simple page that loads data from a table, manipulates the result and displays it. Simple. I'm going to use '=' for instantiating a class and '-' for passing a class in using constructor injection. It seems to me that the database has to be passed from one end of the application to the other, which doesn't seem right. Here's how I would do it if I wanted to separate concerns: index =>Controller =>Model Layer =>Database =>DAO->Database I have this rule in my head that says I'm not supposed to create objects inside other objects. So what do I do with the Database? Or even the Model, for that matter? I'm obviously missing something basic about this. I would love a simplified example so that I can move forward in my code. I feel really hamstrung by this.
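
    A minimal composition-root sketch of the idea (plain JavaScript, class and method names hypothetical): the database connection is created exactly once at the entry point, and every object that needs it receives it through its constructor, so nothing deep in the stack ever has to build it:

      // Hypothetical classes illustrating constructor injection from a single composition root.
      class Database {
          constructor(dsn) { this.dsn = dsn; }
          query(sql, params) { /* talk to the real database here */ }
      }

      class PageDao {
          constructor(db) { this.db = db; }                 // the DAO depends only on the Database
          findById(id) { return this.db.query("SELECT * FROM pages WHERE id = ?", [id]); }
      }

      class PageModel {
          constructor(pageDao) { this.pageDao = pageDao; }  // the model depends only on the DAO
          getPage(id) { return this.pageDao.findById(id); }
      }

      class PageController {
          constructor(pageModel) { this.pageModel = pageModel; }
          show(id) { return this.pageModel.getPage(id); }
      }

      // index: the only place where anything is instantiated and wired together.
      const db = new Database("mysql://localhost/app");
      const controller = new PageController(new PageModel(new PageDao(db)));
      controller.show(42);

    The "rule" about not creating objects inside other objects then applies to collaborators like the Database, not to the composition root: index is allowed to new everything up, precisely so that nothing else has to.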

    Read the article

  • Iterating Through N Level Children

    - by bobber205
    This seems like something neat that might be "built into" jQuery, but I think it's still worth asking. I have a problem that can easily be solved by iterating through all the children of an element. I've recently discovered I need to account for cases that go a level or two deeper than the one level (just calling .children() once) I am currently handling. jQuery.each(divToLookAt.children(), function(index, element) { //do stuff } ); This is what I'm currently doing. To go a second layer deep, I run another loop after the "do stuff" code for each element. jQuery.each(divToLookAt.children(), function(index, element) { //do stuff jQuery.each(jQuery(element).children(), function(indexLevelTwo, elementLevelTwo) { //do stuff } ); } ); If I want to go yet another level deep, I have to do this all over again. This is clearly not good. I'd love to declare a "level" variable and then have it all taken care of. Anyone have any ideas for a clean, efficient, jQuery-ish solution? Thanks!
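
    One recursive sketch of the idea (plain JavaScript/jQuery; the function name is made up): pass the depth in as a parameter and recurse until it runs out:

      // Visit the children of `root`, then their children, down to `levels` levels deep.
      function eachDescendant(root, levels, callback) {
          if (levels <= 0) {
              return;
          }
          jQuery(root).children().each(function (index, element) {
              callback(index, element);                       // "do stuff" for this element
              eachDescendant(element, levels - 1, callback);  // then go one level deeper
          });
      }

      // Usage: three levels below divToLookAt.
      eachDescendant(divToLookAt, 3, function (index, element) {
          // do stuff
      });

    If every level is wanted regardless of depth, divToLookAt.find("*") visits all descendants in a single call.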

    Read the article

  • Wrong class type in objective C

    - by Max Hui
    I have a parent class and a child class: GameObjectBase (parent) and GameObjectPlayer (child). When I override a method in the child class and call it using [myPlayerClass showNextFrame], the parent class implementation is called. In the debugger I can see that myPlayerClass was indeed of class type GameObjectBase (which is the parent class). How come? GameObjectBase.h #import <Foundation/Foundation.h> #import "cocos2d.h" @class GameLayer; @interface GameObjectBase : NSObject { /* CCSprite *gameObjectSprite; // Sprite representing this game object GameLayer *parentGameLayer; */ // Reference of the game layer this object // belongs to } @property (nonatomic, assign) CCSprite *gameObjectSprite; @property (nonatomic, assign) GameLayer *parentGameLayer; // Class method. Autorelease + (id) initWithGameLayer:(GameLayer *) gamelayer imageFileName:(NSString *) fileName; // "Virtual methods" that the derived class should implement. // If not implemented, this method will be called and Assert game - (void) update: (ccTime) dt; - (void) showNextFrame; @end GameObjectPlayer.h #import <Foundation/Foundation.h> #import "GameObjectBase.h" @interface GameObjectPlayer : GameObjectBase { int direction; } @property (nonatomic) int direction; @end GameLayer.h #import "cocos2d.h" #import "GameObjectPlayer.h" @interface GameLayer : CCLayer { } // returns a CCScene that contains the GameLayer as the only child +(CCScene *) scene; @property (nonatomic, strong) GameObjectPlayer *player; @end When I examine in the debugger what type "temp" is in this function inside the GameLayer class, it shows the parent class GameObjectBase instead of the subclass GameObjectPlayer: - (void) update:(ccTime) dt { GameObjectPlayer *temp = _player; [temp showNextFrame]; }

    Read the article

  • RouterOS on Hyper-V (v3/2012) - any way to get it working?

    - by TomTom
    Trying to set up a small VPN endpoint to connect into a remote Hyper-V cluster using RouterOS. Has anyone got it working on Hyper-V with the latest builds of RouterOS? It seems the legacy network adapter is not supported anymore either (or is just broken). The platform is a Windows Server 2012 RC. This is not a high performance setup - the RouterOS won't do the routing for more than the backend administrative access, and the only real traffic we will see there is when ISO images for new operating systems are uploaded. Otherwise we will possibly have RDP traffic as well as web / HTTP traffic, but this is internal only (dashboards, some control panels). The server has no public-facing role. So the price for non-virtualized network cards is OK for me. After hooking up - ping just does not work. After some time I see the entry in Windows (arp -a on the command line), so I know that the Hyper-V side is set up properly. Just no packets arrive. I have turned off all protection on Hyper-V (or: not turned it on), so no MAC spoofing protection etc. in the Advanced page for the legacy adapters. Unless I can get it to work I will have to resort to using a Windows install as router / VPN endpoint, which introduces another OS into the fabric (we run all routers etc. so far on MikroTik hardware, which is why I want this one to be RouterOS, too). And no, putting hardware there is NOT an option - the cost would be significant.

    Read the article

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
    Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 – Platform Monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check my prerequisites I get problems on the SQL and Active Directory components. (I'm running SQL Server 2008 and Server 2008 with Active Directory not installed.) Errors: 1. Microsoft SQL Server 2005 Service Pack 1 required. Details: SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team. 2. In order to install, Active Directory needs to be present. Details: Setup failed to verify the presence of Active Directory for this server. I've got a couple of questions I need answered, and hope someone can help. Do I need to install Active Directory for SCOM to work? Can I run SCOM with an SQL 2008 database? How do I get past these problems?

    Read the article

  • Kodak Scanner Software Issue 0xc0000005

    - by Andrew
    I am trying to be able to use a program called Kodak Smart Touch, but just today I have experienced an error preventing it from opening. The scanner is a business class scanner, an i2900. It comes with a small program called Kodak Smart Touch. This program allows you to select and customize a number of presets on the front panel and automatically create/save scanned files, etc. I was running a backup with Aomei Backupper at the time, and I wanted to scan a file. I tried to scan as normal, but right at the end the program quit with an error. Now, whenever I try to start Smart Touch, even not running the backup and with Aomei uninstalled, the following error pops up: Title of message: KSSCFG (version 1.7.15.121102) has stopped working. Body: See (path name) for more information. When I go to the path in the error message, there are two files, ERRORLOG.txt and KSSCFG.dmp. The error log says, "KSSCFG (version 1.7.15.322.121102) case an Access Violation(0xc0000005). Error occur at 2014/06/11 21:37:54 Operation System: Windows 7 64bits (6.1.7601) processor(s) type AMD x86 or x64" The dump file can be downloaded here: http://l.bitcasa.com/yzbtv1VV Opening it with a hex editor yields very little of use, but I do have an internal commandline utility that may help: http://l.bitcasa.com/cTXvAsst I've already uninstalled and reinstalled all the kodak software several times to no avail. What is really odd is that the TWAIN and ISIS drivers work fine through another scanning program. I am at my wit's end, and I have no idea what to do next. Thanks.

    Read the article

  • Failed Backup Job With Backup Exec 12 and AOFO

    - by Mort
    I am backing up a Windows 2003 Small Business Server with SP2. We are running Backup Exec 12 with SP4. Recently the backup job started failing on backing up the system state with the following error: V-79-57344-34110 - AOFO: Initialization failure on: "System State". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS). Snapshot provider error (0xE000FE7D): Access is denied. To back up or restore System State, administrator privileges are required. Check the Windows Event Viewer for details. Upon review of Symantec's website, the error indicates a credential problem. However, when I test the credentials they come back with no failures. I have found another forum post here referencing a similar error and have tried what was indicated, with no successful results. I have created new jobs based on new selection lists, with no successful results. I suspect a new update, possibly from Microsoft, may be causing this, but I have no idea which one. I am looking for feedback. Thanks.

    Read the article

  • untrusted (self-sign) certificate on android browser

    - by Basiclife
    Hi all, Apologies for the brevity of this question, but due to an unfortunate series of events I've managed to brick my PC, so am posting from my phone... We've just set up Windows Small Business Server 2008 at work, which has an external web portal accessible via HTTPS. We haven't yet bought/installed any certificates. The portal provides access to email, SharePoint, remote desktop, etc. (I'm aware some of these are never going to work on the phone.) From Firefox and other desktop browsers, this displays an "untrusted cert" warning which I can choose to ignore. When browsing from my mobile I get a popup notification which says "A secure connection could not be established". When I OK this (my only option) I see the standard Android-generated "unable to load page - has it moved?" page. Does anyone know of a way to either accept the certificate temporarily or allow untrusted certificates generally? I'm aware that the latter option is non-ideal in the mid to long term, but at the moment I need to access the portal and am willing to either toggle settings as/when required or forego using the mobile for banking, etc. to mitigate my risk. Thanks in advance for any help you can provide, and apologies again for brevity. In case it helps, I'm on the G1 running Android 1.6 using the default browser.

    Read the article

  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand why that is: every number stored in memory takes 8 bytes rather than just 4. That, in simple terms, means that your amount of RAM is reduced to half when you use Windows 7 on a 64-bit computer. Now, I have an Intel Core 2 Duo laptop running Windows Vista right now (2 GB of RAM). My question is: since Core 2 is a 64-bit architecture, if I upgrade to Windows 7 will my laptop be working as if it had just 1 GB of RAM? Or... to say it in other words: on a 64-bit PC with Windows 7, do you need twice as much RAM as you would need on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible business to have a 64-bit computer and Windows 7 on it (I hope I am mistaken, though). Follow-up: After some answers, I'm realizing it's not the same thing to have a 32-bit OS on a 64-bit PC as to have a 64-bit OS on a 64-bit PC. Apparently, the problem of Windows 7 requiring twice as much RAM on 64-bit architectures arises when both the OS and the PC support 64 bits. I'd like new answers to address this issue. Also, is it possible to have more than 4 GB of RAM on a 64-bit PC using a 32-bit version of Windows?

    Read the article

  • Ideal Bacula appliance?

    - by Ricket
    I'm an intern at a small company and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :) So anyway I was looking at Bacula. Feature list sounds great, documentation is plentiful, but it's only software. So my question is, what is the ideal backup server to run the Bacula server software on? And not only the server but other related appliances. Our current backup appliance uses only hard drives, not tape drives. It has several plugged into it at one time, in hotswap bays on the front of the machine. I couldn't help but notice though, it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? Maybe I'm being naive, I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • The boot selection failed because a required device is inaccessible 0xc000000e

    - by bbodenmiller
    A family member of mine recently went on vacation and turned off their computer, something they normally do not do; upon returning home it would not boot up properly and now returns the error message below. Generally friends and family come to me for help with computers and I have no problem, but this time I am a bit stumped. Any suggestions would be greatly appreciated. As you can see, the error message is: Status: 0xc000000e Info: The boot selection failed because a required device is inaccessible. Before going to this error message it briefly flashes the Windows loading screen. I have been able to confirm, through the Windows RE command line and the dir command, that the C: drive is accessible and the machine is likely just suffering a boot-up issue. I have tried: Launching the repair process discussed in the error message three times; however, each time it requires a restart and then returns to the same error message. Changing the boot order to put the hard drive first. Getting into safe mode; F8 just results in the same error message before I can get to the menu to select safe mode. I have checked to make sure the BCD (bcdedit, Boot Configuration Data) is still intact as per https://www.symantec.com/business/support/index?page=content&id=TECH160475 I plan to try (but would like additional comments on): sfc /scannow; requires a restart and thus will likely result in the error message again. A memory scan. Bootrec as per http://support.microsoft.com/kb/927392#method1 Swapping IDE cables/ports. Resetting the BIOS. I noticed others with similar issues around the web are dual-booting; however, this machine is not set up in a dual-boot environment. Additionally, at one point this error message supposedly showed up before I started working on the computer: The instruction at 0xfbe2584d referenced memory at 0x00000008. The memory could not be read. As previously stated, any additional suggestions or words of advice would be greatly appreciated.

    Read the article

  • Static IP for dynamic IP

    - by scape279
    I have a dynamic IP address. I would like to have a static IP, but Virgin Media don't allow static IPs for residential broadband services, even if you ask them really nicely and offer to pay for it without switching to a business tariff. I am already registered with a dynamic DNS service which is updated by my router, e.g. me.example.com will always resolve to my dynamic IP. This is fine in some circumstances, but not if you can only enter an IP address into configuration files/hardware, such as firewalls, Subversion services, etc. Is there a way I can have a static IP address 'forwarding' to my dynamic IP? Would a possible solution involve tunnelling? Setting up a private proxy? Please note the following: I am able to buy an IP address from my web host. I have access to a webserver and I am able to create custom DNS zones. I'm happy to have a webserver running at home if necessary also. I do not wish to change broadband providers. I have zero control over the services that require the IP address to be entered, so I cannot tackle the problem from that end (the services I need to access are at work). PS: I've tried googling this issue, but it is very difficult to search for, as most results are related to dynamic DNS (which I already have set up and isn't quite what I'm after).

    Read the article

  • Ubuntu software stack to mimic Active Directory auth

    - by WickedGrey
    I'm going to have an Ubuntu 11.10 box in a customer's data center running a custom webapp. The customer will not have ssh access to the box, but will need authentication and authorization to access the webapp. The customer needs to have the option of either pointing the webapp at something that we've installed locally on the machine, or to use an Active Directory server that they have. I plan on using a standard "users belong to groups; groups have sets of permissions; the webapp requires certain permissions to respond" auth setup. What software stack can I install locally that will allow an easy switch to and from an Active Directory server, while keeping the configuration as simple as possible (both for me and the end customer)? I would like to use as much off-the-shelf software for this as possible; I do not want to be in the business of keeping user passwords secure. I could see handling the user/group/permission relationships myself if there is not a good out-of-the-box solution (but that seems highly unlikely). I will accept answers in the form of links to "here is what you need" pages, but not "here is what Kerberos does" unless that page also tells me if it's required for my use case (essentially, I know that AD can speak Kerberos, but I can't tell if I need it to, or if I can just use LDAP, or...).

    Read the article

  • Dell Management Packs in System Center Operations Manager 2007 R2?

    - by bwerks
    Hey all, I recently set up SCOM in a small business network environment. The root management server is a Dell Poweredge 2950, and I'd like to use SCOM to monitor it using Dell's management packs. I've imported the management packs into the SCOM deployment and followed Dell's installation instructions, but it doesn't seem to be fully working yet. Currently, the Diagram views in the Dell tree (Monitoring tab) seem to show me the server's place in the network topology, so it seems that at least part of it is working. However, none of the reports under "Performance and Power Monitoring Views" provide any information. When clicking on one of them (Power Consumption (Watts), for instance), the display area is blank and there is a tooltip visible that reads "No performance counter is selected. To select a counter, place a check mark in the Show column in legend below." However, in the legend, there's nothing there for me to check. I've installed OpenManage 6.2 on the server as per the Dell documentation, but I don't know what else I could have done that I missed. Does this sound like a familiar problem to anyone?

    Read the article

  • How to update-grub on a system running overlayroot?

    - by mikepurvis
    We ship boxes configured with overlayroot, using the following overlayroot.conf: overlayroot=device:dev=/dev/sda6,timeout=20,recurse=0 Which produces the following mount configuration: $ mount overlayroot on / type overlayfs (rw,errors=remount-ro) /dev/sda5 on /media/root-ro type ext3 (ro,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered) /dev/sda6 on /media/root-rw type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered) /dev/sda1 on /boot type ext3 (rw) As you can see, three key physical partitions: sda1 is /boot, sda5 is a read-only "factory" root, and sda6 is a "user" root which can be wiped at any point to restore the machine to its original factory state. Now, the problem arises when update-grub is run for any reason: $ sudo update-grub [sudo] password for administrator: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?). Understandable, since / is an overlayfs. The contents of /usr/sbin/update-grub are: #!/bin/sh set -e exec grub-mkconfig -o /boot/grub/grub.cfg "$@" With /usr/sbin/grub-mkconfig being the business part of things. But the actual problem is in /usr/sbin/grub-probe, called by grub-mkconfig, and grub-probe is a binary. So my question is, is there a parameter or whatever which can make grub-probe do the right thing in the face of / being an overlayfs? And secondly, is there a way to hack/patch that in so that the update-grub script just does the right thing? Thanks.

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
    My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them, repurposing the old Sun V20z that is sitting in my basement rack, and moving the hosting in-house, literally. My math goes as follows: my company pays up to $80 a month for my home internet service, which would cover the upgrade from my current Fios service to Comcast business internet with 5 static IPs, so this comes free. Running the server will cost me about $180/year at the current rate of approx. $0.20/kWh. My time is free. So, it seems that my net cost of doing this would be about $80 annually, plus the work that goes into setup and maintenance. I will have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider based on recent stats, will not get suspended, and won't have to spend hours with customer support. Overall, I am not convinced. Has anybody actually done that? What was your experience, and did it pay off?

    Read the article

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a little server running Windows Small Business Server 2003. I'm using RAID 1, via a HighPoint RocketRAID 1640 RAID card, with two hard drives. This week the server alarmed, and during reboot I got the error message Broken Mirroring (User Manual page 30). I had a few alternatives (see the manual); first I tried Continue, but the server restarted during boot. Next time I chose Power Off, replaced the oldest hard drive with a new one, and when I booted, I selected Rebuild. Then I selected the new hard drive as the target of the rebuild. The rebuild procedure started and a progress bar at 0% showed up, but after a few seconds I got the message Copy Failed!, then the server booted and Windows Server started. Now it works fine. But I guess that I'm just using one hard drive now, and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience with this situation. Does anyone have some guidance?

    Read the article

  • Printer spooler spoolsv.exe crashes

    - by MattiasSN
    We have a problem with a Windows 7 print spooler. There is a Windows Small Business Server 2011 machine running as the print server, and on 2 computers in the network the print spooler keeps crashing at random. The log file says it is ntdll.dll that is at fault. Faulting application name: spoolsv.exe, version: 6.1.7601.17514, time stamp: 0x4ce7b4e7 Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec4aa8e Exception code: 0xc0000374 Fault offset: 0x00000000000c40f2 Faulting process id: 0x55c Faulting application start time: 0x01cd9db324904eb1 Faulting application path: C:\Windows\System32\spoolsv.exe Faulting module path: C:\Windows\SYSTEM32\ntdll.dll Report Id: 8789af0b-09a6-11e2-9d78-001c25237c45 The print spooler on the server keeps running and works fine. We can also print from other computers. But on two computers the print spooler crashes. Sometimes it crashes after a user logs in, but it has also happened multiple times after a print job. After each crash we get the same ntdll.dll error. Hopefully someone can help me with this problem. If you need more information, don't hesitate to ask.

    Read the article

  • Why the system information message when accessing an Ubuntu server doesn't match free -m?

    - by Andres
    Each time I SSH into my AWS Ubuntu servers I see a system information message, showing load, memory usage and packages available to install, like this: Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-51-virtual x86_64) * Documentation: https://help.ubuntu.com/ System information as of Sun Nov 10 18:06:43 EST 2013 System load: 0.08 Processes: 127 Usage of /: 4.9% of 98.43GB Users logged in: 1 Memory usage: 69% IP address for eth0: 10.236.136.233 Swap usage: 100% Graph this data and manage this system at https://landscape.canonical.com/ 13 packages can be updated. 0 updates are security updates. Get cloud support with Ubuntu Advantage Cloud Guest http://www.ubuntu.com/business/services/cloud Use Juju to deploy your cloud instances and workloads. https://juju.ubuntu.com/#cloud-precise *** /dev/xvda1 will be checked for errors at next reboot *** *** System restart required *** My question is about the memory percentage shown. In this case it's showing 69% memory usage, but since the swap usage was 100% I checked it myself. So when I run free -m I get this: total used free shared buffers cached Mem: 1652 1635 17 0 4 29 -/+ buffers/cache: 1601 51 Swap: 895 895 0 And that is, of course, closer to 100% than to 69%.

    Read the article

  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, Let's run this idea by the group here. I am thinking about using VMware Server in production to host: a 2008 domain controller with DHCP and DNS; a 2008 member server with WSUS, some antivirus software, and other "management" utilities; and a second 2008 member server with SQL, IIS, and file shares, for a medium business of 50-100 desktops. The reason I am leaning toward Server vs. ESXi is backup purposes. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the VMDKs. I am wondering if putting this virtual environment on top of a basic 2008 server install will allow for easier backups, both to tape and to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host take too many resources away from the actual production machines? This would be a new Dell 410 server with 12 GB of RAM, six 600 GB 15K drives in RAID 6, and dual Intel Xeon 2.26 GHz procs.

    Read the article

  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes. Running OS X 10.6 with a normal, local account. Work binds my account to Active Directory. Much time passes with no issues. Set up rvm to manage Ruby installs (this becomes important later). Upgraded to OS X 10.7 a few days ago. After a successful install, attempted to log in, was presented with a "Must reset password" dialog that never allowed a password to be reset. It would simply shake the box after the new password was entered. Much googling was done. Much more googling was done. Swearing was had. Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]. Logged out of root, logged into the new account with no issues. After OS X asked for my account password a few times to update Keychain and other system-level stuff, it was back to business as usual. Opened Terminal, cd'd to the project folder, tried "rails server" and was presented with: /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError) from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec' from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem' from /usr/local/bin/rails:18:in' Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running --trace on the rvm installer shows it dies on this line: mkdir: /Users/[old account]: Permission denied Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile with no luck. Can anyone guess why /Users/[old account] would still be kicking around?

    Read the article
