Search Results

Search found 2109 results on 85 pages for 'it depends'.


  • Hibernate 1:M relationship, row order, constant-values table, and concurrency

    - by EugeneP
    Tables A and B need to have a 1:M relationship. A's and B's are added at application runtime: an A is created, then say 4 B's. Each B instance has to come in order, so that I can later extract them in the same order I added them. The app will be a web app running on Tomcat, so 10 instances may be working simultaneously. So my questions are:

    1) How do I preserve insertion order, so that I can extract the B instances that an A references in the same order I persisted them? That's tricky, because we add to a Collection and then it gets saved (am I right?). So it depends on how Hibernate saves it; what if it changes the order in which we added the instances? I've seen something like LIST instead of SET when describing relationships. Is that what I need?

    2) How do I add a third column to B so that I can differentiate the instances, something like SEX(M,F,U) in the B table? Do I need a special table, or is there an easy way to describe constants in Hibernate? What do you recommend?

    3) Talking about concurrency, what methods do you recommend? There should be no collisions in the db, and as you see there might easily be some if rows are not inserted (PK assigned) right where the insert is invoked, without delays.
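
    One standard answer to question 1 is a list mapping with an explicit order column, and to question 2 a mapped enum rather than a lookup table. A minimal sketch using JPA-style annotations; the entity, join-column, and order-column names are illustrative, and @OrderColumn assumes JPA 2.0 or a Hibernate version that supports it (older Hibernate used @IndexColumn for the same purpose):

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        @Entity
        public class A {
            @Id @GeneratedValue
            private Long id;

            // @OrderColumn persists the list index, so the B's come back
            // in exactly the order they were added to the list.
            @OneToMany(cascade = CascadeType.ALL)
            @JoinColumn(name = "a_id")
            @OrderColumn(name = "position")
            private List<B> bs = new ArrayList<B>();
        }

        @Entity
        public class B {
            @Id @GeneratedValue
            private Long id;

            // A plain enum covers a SEX(M,F,U)-style column; no extra table needed.
            @Enumerated(EnumType.STRING)
            private Sex sex;

            public enum Sex { M, F, U }
        }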

    Read the article

  • .NET remoting - Better solution to wait for a service to initialize?

    - by CitizenInsane
    Context: I have a client application (which I cannot modify; I only have the binary) that needs to run external commands from time to time, and those commands depend on a resource that takes very long to initialize (about 20 s). I therefore decided to initialize this resource once and for all in a "CommandServer.exe" application (a single instance in the system tray) and let my client application call an intermediate "ExecuteCommand.exe" program that uses .NET remoting to perform the operation on the server. "ExecuteCommand.exe" is in charge of starting the server on the first call and then leaves it alive to speed up further commands.

    The service:

        public interface IMyService { void ExecuteCommand(string[] args); }

    The "CommandServer.exe" (using WindowsFormsApplicationBase for single-instance management plus a user-friendly splash screen during resource initialization):

        private void onStartupFirstInstance(object sender, StartupEventArgs e)
        {
            // Register communication channel
            channel = new TcpServerChannel("CommandServerChannel", 8234);
            ChannelServices.RegisterChannel(channel, false);
            // Register service
            var resource = veryLongToInitialize();
            service = new MyServiceImpl(resource);
            RemotingServices.Marshal(service, "CommandServer");
            // Create icon in system tray
            notifyIcon = new NotifyIcon();
            ...
        }

    The intermediate "ExecuteCommand.exe":

        static void Main(string[] args)
        {
            startCommandServerIfRequired();
            var channel = new TcpClientChannel();
            ChannelServices.RegisterChannel(channel, false);
            var service = (IMyService)Activator.GetObject(typeof(IMyService), "tcp://localhost:8234/CommandServer");
            service.RunCommand(args);
        }

    Problem: As the server takes very long to start (about 20 s to initialize the required resources), "ExecuteCommand.exe" fails on the service.RunCommand(args) line because the server is not yet available.

    Question: Is there an elegant way to tune the delay before receiving "service not available" when calling service.RunCommand?

    NB1: Currently I'm working around the issue by adding a mutex in the server to signal complete initialization, and having "ExecuteCommand.exe" wait for this mutex before calling service.RunCommand.

    NB2: I have no background with .NET remoting, nor with WCF, which is its recommended replacement. (I chose .NET remoting because it looked easier to set up for this one-off issue of running external commands.)
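
    An alternative to the mutex workaround is to retry the first remoting call until the server answers, since Activator.GetObject only builds a proxy and no connection is attempted until the first method call. A rough sketch against the IMyService interface above (the 30-second budget and 500 ms poll are arbitrary, and the exact exception type worth catching, SocketException vs. RemotingException, should be confirmed against what actually gets thrown):

        // Fragment of ExecuteCommand.exe: poll until the server is up or the budget runs out.
        static void RunWithRetry(IMyService service, string[] args)
        {
            DateTime deadline = DateTime.UtcNow.AddSeconds(30);
            while (true)
            {
                try
                {
                    service.ExecuteCommand(args);  // the proxy connects on the first call
                    return;
                }
                catch (System.Net.Sockets.SocketException)
                {
                    // Server still initializing: wait a bit, then try again.
                    if (DateTime.UtcNow >= deadline) throw;
                    System.Threading.Thread.Sleep(500);
                }
            }
        }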

    Read the article

  • I cannot navigate to the next page in Windows Phone 7

    - by Vijay
    I am trying to navigate from one page to another depending on the login. If the user is already logged in, the Welcome page should open; otherwise the Login page should open. I am trying it like this, with the Splash page as the startup page.

    This is the splash screen xaml.cs:

        namespace NewExample.Views
        {
            public partial class SplashPage : PhoneApplicationPage
            {
                public SplashPage()
                {
                    InitializeComponent();
                    this.DataContext = new SplashPageViewModel();
                }
            }
        }

    This is the splash screen view model, where I check whether the user is already logged in:

        namespace NewExample.ViewModel
        {
            public class SplashPageViewModel
            {
                public static bool isLogin = false;

                public SplashPageViewModel()
                {
                    var rootFrame = (App.Current as App).RootFrame;
                    if (isLogin)
                        rootFrame.Navigate(new Uri("/Views/WelcomePage.xaml", UriKind.Relative));
                    else
                        rootFrame.Navigate(new Uri("/Views/LoginPage.xaml", UriKind.Relative));
                }
            }
        }

    But it is not working: only the Splash page is shown, and it never navigates to another page. Please help me resolve this problem.
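
    A likely culprit: the view model's constructor runs while the splash page itself is still being constructed, so the Navigate call fires before the frame is ready and is silently dropped. Deferring the navigation until the page's Loaded event usually works; a minimal sketch using the page names from the question:

        public SplashPage()
        {
            InitializeComponent();
            // Defer navigation until the page (and the root frame) has loaded.
            this.Loaded += (s, e) =>
            {
                var target = SplashPageViewModel.isLogin
                    ? "/Views/WelcomePage.xaml"
                    : "/Views/LoginPage.xaml";
                NavigationService.Navigate(new Uri(target, UriKind.Relative));
            };
        }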

    Read the article

  • How to sort NSMutableArray? Code review needed

    - by JAM
    Good evening. This code works: it sorts an array of cards by both suit and card value. It is also very much brute force. Can you recommend a better way? Does Objective-C help with a situation where the object being sorted itself has multiple fields on which the sorting depends?

        -(void) sort:(NSMutableArray *)deck {
            NSUInteger count = [deck count];
            Card *thisCard;
            Card *nextCard;
            int this;
            int next;
            BOOL stillSwapping = true;
            while (stillSwapping) {
                stillSwapping = false;
                for (NSUInteger i = 0; i < count; ++i) {
                    this = i;
                    next = i + 1;
                    if (next < count) {
                        thisCard = [deck objectAtIndex:this];
                        nextCard = [deck objectAtIndex:next];
                        if ([thisCard suit] > [nextCard suit]) {
                            [deck exchangeObjectAtIndex:this withObjectAtIndex:next];
                            stillSwapping = true;
                        }
                        if ([thisCard suit] == [nextCard suit]) {
                            if ([thisCard value] > [nextCard value]) {
                                [deck exchangeObjectAtIndex:this withObjectAtIndex:next];
                                stillSwapping = true;
                            }
                        }
                    }
                }
            }
        }
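
    The usual improvement is to hand the multi-key comparison to a comparator-based sort instead of a hand-rolled bubble sort; Objective-C's -sortUsingComparator: takes a block that does exactly this. The same ordering is shown here in Swift, where the two-key rule reads naturally as a tuple comparison (this assumes deck is a Swift Array of a Card type with comparable suit and value properties):

        // Sorts in place: by suit first, then by value within equal suits.
        deck.sort { lhs, rhs in
            (lhs.suit, lhs.value) < (rhs.suit, rhs.value)
        }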

    Read the article

  • Should Service Depend on Many Repositories, or Break Them Up?

    - by Josh Pollard
    I'm using the repository pattern for my data access, so I basically have a repository per table/class. My UI currently uses service classes to actually get things done, and these service classes wrap, and therefore depend on, repositories. In many cases my services are only dependent upon one or two repositories, so things aren't too crazy. Unfortunately, one of my forms in the UI expects the user to enter data that will span five different tables. For this form I made a single service class that depends upon five repositories. The methods within the service for saving and loading the data then call the appropriate methods on all of the corresponding repositories. As you can imagine, the save and load methods in this service are really big. Also, unit testing this service is getting really difficult because I have to set up so many fake repositories. Would it have been a better choice to break this single service apart into a few smaller services? It would put more code at the UI layer, but would make the services smaller and more testable.
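
    One middle ground is to keep a small service per concern and let the form talk to a thin facade that only coordinates them, so each piece stays independently testable. A rough sketch (all names invented for illustration):

        // Hypothetical: each small service wraps one or two repositories.
        public interface ICustomerService { void Save(CustomerDto customer); }
        public interface IOrderService    { void Save(OrderDto order); }

        // The form's save spans several tables, but the facade only coordinates;
        // testing it needs two small fakes instead of five repositories.
        public class NewAccountFacade
        {
            private readonly ICustomerService customers;
            private readonly IOrderService orders;

            public NewAccountFacade(ICustomerService customers, IOrderService orders)
            {
                this.customers = customers;
                this.orders = orders;
            }

            public void Save(NewAccountForm form)
            {
                customers.Save(form.Customer);
                orders.Save(form.InitialOrder);
            }
        }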

    Read the article

  • How can I bind the same dependency to many dependents in Ninject?

    - by Mike Bantegui
    Let's say I have three interfaces: IFoo, IBar, IBaz. I also have the classes Foo, Bar, and Baz that are the respective implementations. Each implementation depends on the interface IDependency. So Foo (and similarly Bar and Baz) might read:

        class Foo : IFoo
        {
            private readonly IDependency Dependency;

            public Foo(IDependency dependency)
            {
                Dependency = dependency;
            }

            public void Execute()
            {
                Console.WriteLine("I'm using {0}", Dependency.Name);
            }
        }

    Let's furthermore say I have a class Container which happens to contain instances of IFoo, IBar and IBaz:

        class Container : IContainer
        {
            private readonly IFoo _Foo;
            private readonly IBar _Bar;
            private readonly IBaz _Baz;

            public Container(IFoo foo, IBar bar, IBaz baz)
            {
                _Foo = foo;
                _Bar = bar;
                _Baz = baz;
            }
        }

    In this scenario, I would like the implementation class Container to be bound against IContainer with the constraint that the IDependency injected into IFoo, IBar, and IBaz is the same for all three. Manually, I might implement it as:

        IDependency dependency = new Dependency();
        IFoo foo = new Foo(dependency);
        IBar bar = new Bar(dependency);
        IBaz baz = new Baz(dependency);
        IContainer container = new Container(foo, bar, baz);

    How can I achieve this within Ninject? Note: I am not asking how to do nested dependencies. My question is how I can guarantee that a given dependency is the same among a collection of objects within a materialized service. To be extremely explicit, I understand that Ninject in its standard form will generate code that is equivalent to the following:

        IContainer container = new Container(new Foo(new Dependency()),
                                             new Bar(new Dependency()),
                                             new Baz(new Dependency()));

    I would not like that behavior.
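
    Scoping the binding is the usual Ninject answer: InSingletonScope() gives one IDependency for the whole kernel, while InCallScope() (from the Ninject.Extensions.NamedScope package, Ninject 3) gives one instance per Get() call graph, which matches "same within one materialized Container". A sketch, assuming that extension is installed:

        var kernel = new StandardKernel();
        kernel.Bind<IFoo>().To<Foo>();
        kernel.Bind<IBar>().To<Bar>();
        kernel.Bind<IBaz>().To<Baz>();
        kernel.Bind<IContainer>().To<Container>();

        // One Dependency per resolution request: the Foo, Bar and Baz
        // built for the same Container all receive the same instance.
        kernel.Bind<IDependency>().To<Dependency>().InCallScope();

        var container = kernel.Get<IContainer>();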

    Read the article

  • Android - How to decide whether to run a Service in a separate process?

    - by pableu
    I am working on an Android application that collects sensor data over the course of multiple hours. For that, we have a Service that collects the sensor data (e.g. acceleration, GPS, ...), does some processing and stores it remotely on a server. Currently, this Service runs in a separate process (using android:process=":background" in the manifest). This complicates the communication between the Activities and the Service, but my predecessors created the application this way because they thought that separating the Service from the Activities would make it more stable. I would like some more factual reasons to justify the effort of running a separate process. What are the advantages? Does it really run more stably? Is the Service less likely to be killed by the OS (to free up resources) if it's in a separate process? Our application uses startForeground() and friends to minimize the chance of getting killed by the OS. The Android docs are not very specific about this; they mostly state that it depends on the application's purpose ;-) TL;DR: What are objective reasons to put a long-running Service in a separate process (in Android)?

    Read the article

  • Windows 2008 and wrong BPL loading [SOLVED]

    - by Beto Neto
    I have an application built with run-time packages. When the executable starts, it automatically loads the required packages (.bpl). Recently we installed a Windows 2008 R2 server to use for Terminal Services. We maintain some old compiled versions of our application in different paths, like this:

        c:\app\version_1\common.bpl
        c:\app\version_1\app.exe
        c:\app\version_2\common.bpl
        c:\app\version_2\app.exe

    Common.bpl is the run-time package that app.exe depends on. THE PROBLEM: I start "c:\app\version_2\app.exe" and it loads "c:\app\version_2\common.bpl". When I start "c:\app\version_1\app.exe", it loads the WRONG bpl (the one from version_2). The path "c:\app\version_2\" isn't in the system search path. On a Windows 2003 server this problem doesn't occur. What can I do to solve this? Thanks!

    I downloaded Process Explorer (Microsoft Sysinternals) and checked the loaded modules of each executable; they are all correct! But I noticed another problem. After starting the second version, an entry-not-found error occurs, telling me that an initialization entry point, of a unit that exists in only one of the versions, could not be found. Something is very strange: Process Explorer tells me the process is loading the correct modules, but at run time this seems not to be happening. It seems the applications are sharing the loaded modules.

    SOLVED: There was a mouse hook using FindVCLWindow, and this was generating the AV. Sorry about the inconvenience, guys, and thanks!

    Read the article

  • Should I use an interface or factory (and interface) for a cross-platform implementation?

    - by nbolton
    Example A:

        // pseudo code
        interface IFoo { void bar(); }

        class FooPlatformA : IFoo { void bar() { /* ... */ } }
        class FooPlatformB : IFoo { void bar() { /* ... */ } }

        class Foo : IFoo {
            IFoo m_foo;

            public Foo() {
                if (detectPlatformA()) {
                    m_foo = new FooPlatformA();
                } else {
                    m_foo = new FooPlatformB();
                }
            }

            // wrapper function - downside is we'd have to create one
            // of these for each function, which doesn't seem right.
            void bar() { m_foo.bar(); }
        }

        Main() {
            Foo foo = new Foo();
            foo.bar();
        }

    Example B:

        // pseudo code
        interface IFoo { void bar(); }

        class FooPlatformA : IFoo { void bar() { /* ... */ } }
        class FooPlatformB : IFoo { void bar() { /* ... */ } }

        class FooFactory {
            IFoo newFoo() {
                if (detectPlatformA()) {
                    return new FooPlatformA();
                } else {
                    return new FooPlatformB();
                }
            }
        }

        Main() {
            FooFactory factory = new FooFactory();
            IFoo foo = factory.newFoo();
            foo.bar();
        }

    Which is the better option: example A, example B, neither, or "it depends"?

    Read the article

  • Is it wrong for a context (right click) menu to be the only way a user can perform a certain task?

    - by Eric
    I'd like to know if it ever makes sense to provide some functionality in a piece of software that is only available to the user through a context (right-click) menu. It seems that in most software I've worked with, the right-click menu is always used as a quick way to get to features that are otherwise available from other buttons or menus.

    Below is a screen shot of the UI I'm developing. The tree view on the right shows the user's library of catalogs. Users can create new catalogs, or add and remove existing catalogs to and from their library. Catalogs in their library can then be opened or closed, or set to read-only. The screen shot shows the context menu I've created for the browser. Some commands can be executed independently of any specific catalog (New, Add), while the other commands must be applied to a specifically selected catalog (Close, Open, Remove, ReadOnly, Refresh, Clean Up, Rename).

    Currently the "Catalog" menu at the top of the window looks identical to this context menu. Yet I think this may be confusing to the users, as the tree view which shows the currently selected catalog may not always be visible: the user may have switched to the Search or Filters tab, or the left pane may be hidden entirely. However, I'm hesitant to change the UI so that the commands that depend on a specifically selected catalog are only available through the context menu.

    Read the article

  • polymorphism in C++

    - by user550413
    I am trying to implement the following two functions (the intended behavior is explained in the comments below):

        Number& DoubleClass::operator+(Number& x);
        Number& IntClass::operator+(Number& x);

    I am not sure how to do it:

        class IntClass;
        class DoubleClass;

        class Number {
        public:
            // return a Number object that's the result of x+this, when x is
            // either IntClass or DoubleClass
            virtual Number& operator+(Number& x) = 0;
        };

        class IntClass : public Number {
        private:
            int my_number;
        public:
            // return a Number object that's the result of x+this.
            // The actual class of the returned object depends on x.
            // If x is IntClass, then the result is IntClass.
            // If x is DoubleClass, then the result is DoubleClass.
            Number& operator+(Number& x);
        };

        class DoubleClass : public Number {
        private:
            double my_number;
        public:
            // return a DoubleClass object that's the result of x+this.
            // This should work if x is either IntClass or DoubleClass.
            Number& operator+(Number& x);
        };
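
    This is the classic double-dispatch problem: the result type depends on the dynamic types of both operands. A hedged sketch of the standard solution, adding a second set of virtuals so each concrete class dispatches back on the other operand. Note that the question's return-by-reference signature forces a heap allocation that somebody must free; the sketch keeps that signature purely for illustration:

        class IntClass;
        class DoubleClass;

        class Number {
        public:
            virtual ~Number() {}
            virtual Number& operator+(Number& x) = 0;
            // Double dispatch: x calls back with its own static type.
            virtual Number& addTo(IntClass& x) = 0;
            virtual Number& addTo(DoubleClass& x) = 0;
        };

        class IntClass : public Number {
            int my_number;
        public:
            IntClass(int n) : my_number(n) {}
            int value() const { return my_number; }
            Number& operator+(Number& x) { return x.addTo(*this); }
            Number& addTo(IntClass& x);     // Int + Int       -> IntClass
            Number& addTo(DoubleClass& x);  // Double + Int    -> DoubleClass
        };

        class DoubleClass : public Number {
            double my_number;
        public:
            DoubleClass(double n) : my_number(n) {}
            double value() const { return my_number; }
            Number& operator+(Number& x) { return x.addTo(*this); }
            Number& addTo(IntClass& x);     // Int + Double    -> DoubleClass
            Number& addTo(DoubleClass& x);  // Double + Double -> DoubleClass
        };

        // CAUTION: returning a reference to a new heap object leaks unless the
        // caller takes ownership; done here only to match the asked-for signature.
        Number& IntClass::addTo(IntClass& x)       { return *new IntClass(x.value() + my_number); }
        Number& IntClass::addTo(DoubleClass& x)    { return *new DoubleClass(x.value() + my_number); }
        Number& DoubleClass::addTo(IntClass& x)    { return *new DoubleClass(x.value() + my_number); }
        Number& DoubleClass::addTo(DoubleClass& x) { return *new DoubleClass(x.value() + my_number); }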

    Read the article

  • Pagination in Java

    - by user569125
    I wrote paging logic. My requirement: display 100 elements per page; if I click Next it should display the next 100 records, and if I click Previous the previous 100. Initial variable values: showFrom: 1, showTo: 100, max elements: depends on the size of the data, pageSize: 100. Code:

        if (paging.getAction().equalsIgnoreCase("Next")) {
            paging.setTotalRec(availableList.size());
            showFrom = showTo + 1;
            showTo = showFrom + 100 - 1;
            if (showTo >= paging.getTotalRec())
                showTo = paging.getTotalRec();
            paging.setShowFrom(showFrom);
            paging.setShowTo(showTo);
        } else if (paging.getAction().equalsIgnoreCase("Previous")) {
            showTo = showFrom - 1;
            showFrom = showFrom - 100;
            paging.setShowTo(showTo);
            paging.setShowFrom(showFrom);
            paging.setTotalRec(availableList.size());
        }

    Here I can remove and add elements to the existing data. The above code works fine if I add and remove a few elements, but if I remove or add 100 elements at a time the counts are not displayed properly.
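
    Deriving the window from a page index on every click, instead of mutating showFrom/showTo in place, removes the drift when elements are added or removed, because the bounds are recomputed from the current list size each time. A minimal sketch (class and method names invented for illustration):

        // Hypothetical helper: 1-based [from, to] window for a 0-based page index.
        final class PageWindow {
            static final int PAGE_SIZE = 100;

            static int[] bounds(int pageIndex, int totalRecords) {
                int from = pageIndex * PAGE_SIZE + 1;
                int to = Math.min(from + PAGE_SIZE - 1, totalRecords);
                return new int[] { from, to };
            }
        }

        // Usage: Next/Previous just change pageIndex; the bounds can never go stale.
        int[] window = PageWindow.bounds(currentPage, availableList.size());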

    Read the article

  • Best (in your opinion) Git workflow for the case when releases are done on demand (in most cases 1-2 tickets at once)

    - by Robert
    I'm rather a Git newbie and I'm looking for your advice. In the company I work for, we have a "workflow" with a single Git repo for our project and 2 branches: master and prod. All devs work on the master branch. If a ticket is done (from the dev perspective), we push to the repo. If all tests pass, we make a release. The issue is that in most cases the request from the business guys sounds like: "please release ticket A" or "A && B". In most cases I end up doing something like:

        git checkout prod
        git cherry-pick --no-commit commit_hash
        git commit -m "blah blah to prod" -a

    As you can see, this is not a perfect solution, and I'm under the strong impression that it is a perfect way to nowhere, especially when change A depends on changes B and C. Do you have any suggestions for handling releases on demand when several devs work on the same branch and the flow looks like I described above? All suggestions are welcome. I cannot change the business processes; that will have to stay as it is, unfortunately.
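
    The usual way out of per-ticket cherry-picking is to give each ticket its own short-lived branch cut from prod, so that "release ticket A" becomes a clean merge instead of a cherry-pick, and a dependency of A on B shows up as a merge conflict or an already-contained ancestor rather than a silent gap. A hedged sketch of the commands (branch names invented):

        # one branch per ticket, cut from prod so it can be released alone
        git checkout -b ticket-A prod
        # ...commit the work for ticket A...

        # integrate into master for day-to-day dev and testing
        git checkout master
        git merge ticket-A

        # "please release ticket A" is now just a merge into prod
        git checkout prod
        git merge ticket-A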

    Read the article

  • Time Service will not start on Windows Server - System error 1290

    - by paradroid
    I have been trying to sort out some time sync issues involving two domain controllers and seem to have ended up with a bigger problem. It's horrible. They are both virtual machines (one being on Amazon EC2), which I think may complicate things regarding time servers. The primary DC with all the FSMO roles is on the LAN. I reset its time server configuration like this (from memory):

        net stop w32time
        w32tm /unregister
        shutdown /r /t 0
        w32tm /register
        w32tm /config /manualpeerlist:"0.uk.pool.ntp.org,1.uk.pool.ntp.org,2.uk.pool.ntp.org,3.uk.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        w32tm /config /update
        net start w32time
        reg QUERY HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags

    I checked that AnnounceFlags was set to 0x05, which it was. The output of w32tm /query /status:

        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 10/04/2012 15:03:27
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)

    While this was not what was intended, I thought I would sort it out after making sure that the remote DC was syncing with it first. On the Amazon EC2 remote replica DC (Windows Server 2008 R2 Core):

        net stop w32time
        w32tm /unregister
        shutdown /r /t 0
        w32tm /register
        net start w32time

    This is where it all goes wrong:

        System error 1290 has occurred.
        The service start failed since one or more services in the same process have an incompatible service SID type setting. A service with restricted service SID type can only coexist in the same process with other services with a restricted SID type. If the service SID type for this service was just configured, the hosting process must be restarted in order to start this service.

    I cannot get the w32time service to start. I've tried resetting the time settings and tried to reverse what I have done. The Ec2Config service cannot start either, as it depends on the w32time service. All the solutions I have seen involve going into the telephony service registry settings, but as this is Server Core it does not have that role, and I cannot see the relationship between that and the time service. w32time runs in the LocalService group, and this telephony service, which does not exist on Core, runs in the NetworkService group. Could this have something to do with the process (svchost.exe) not being able to run as a domain account now that it is a domain controller, where originally it ran as a local user group, or something like that? There seem to be a lot of cases of people having this problem, but the only published solution involves the (non-existent on Core) telephony service. Who even uses that?
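
    For the record, error 1290 is about the service's SID type conflicting with the other services sharing its svchost.exe process. The commonly cited workaround is to move w32time into its own process, which is worth verifying on your build before relying on it:

        sc config w32time type= own
        net start w32time
        :: once it starts cleanly, it can optionally be moved back:
        :: sc config w32time type= share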

    Read the article

  • Internet Explorer keeps asking for NTLM credentials in Intranet zone

    - by Tomalak
    Long text, sorry for that. I'm trying to be as specific as possible. I'm on Windows 7 and I experience a very frustrating Internet Explorer 8 behavior. I'm in a company LAN with some intranet servers and a proxy for connecting with the outside world. On sites that are clearly recognized as being "Local Intranet" (as indicated in the IE status bar) I keep getting "Windows Security" dialog boxes that ask me to log in. These pages are served off an IIS6 with "Integrated Windows Security" enabled; NTFS permits Everyone:Read on the files themselves.

    If I enter my Windows credentials, the page loads fine. However, the dialog boxes will be popping up the next time, regardless of whether I ticked "Remember my credentials" or not. (Credentials are stored in the "Credential Manager", but that does not make any difference as to how often these login boxes appear.) If I click "Cancel", one of two things can happen: either the page loads with certain resources missing (images, stylesheets, etc.), or it does not load at all and I get HTTP 401.2 (Unauthorized: Logon Failed Due to Server Configuration). This depends on whether the logon box was triggered by the page itself or by a referenced resource. The behavior appears to be completely erratic: sometimes the pages load smoothly, sometimes one resource triggers a logon message, sometimes it does not. Even simply re-loading the page can result in changed behavior.

    I'm using WPAD as my proxy detection mechanism. All intranet hosts bypass the proxy in the PAC file. I've checked every IE setting I can think of; entered host patterns, individual host names, and IP ranges in every thinkable configuration into the "Local Intranet" zone; ticked "Include all sites that bypass the proxy server"; you name it. It boils down to "sometimes it just does not work", and slowly I'm losing my mind. ;-)

    I'm aware that this is related to IE not automatically passing my NTLM credentials to the webserver but asking me instead. Usually this should only happen for NTLM-secured sites that are not recognized as being in the "Intranet" zone. As explained, this is not the case here, especially since half of a page can load perfectly and without interruption while some of the page's resources (coming from the same server!) trigger the login message. I've looked at http://support.microsoft.com/kb/303650, which gives the impression of describing the problem, but nothing there seems to work. And frankly, I'm not certain that "manually editing the registry" is the right solution for this kind of problem; I'm not the only person in the world with an IE/intranet/IIS configuration, after all. I'm at a loss, can somebody give me a hint?

    Read the article

  • How to set up mod_wsgi for Python on Ubuntu

    - by AutomatedTester
    Hi, I am trying to set up mod_wsgi on my Ubuntu box. I found the following steps at http://ubuntuforums.org/showthread.php?t=833766:

        1. sudo apt-get install libapache2-mod-wsgi
        2. sudo a2enmod mod-wsgi
        3. sudo /etc/init.d/apache2 restart
        4. sudo gedit /etc/apache2/sites-available/default and update the Directory section:

               <Directory /var/www/>
                   Options Indexes FollowSymLinks MultiViews ExecCGI
                   AddHandler cgi-script .cgi
                   AddHandler wsgi-script .wsgi
                   AllowOverride None
                   Order allow,deny
                   allow from all
               </Directory>

        5. sudo /etc/init.d/apache2 restart
        6. Create test.wsgi with:

               def application(environ, start_response):
                   status = '200 OK'
                   output = 'Hello World!'
                   response_headers = [('Content-type', 'text/plain'),
                                       ('Content-Length', str(len(output)))]
                   start_response(status, response_headers)
                   return [output]

    Step 2 fails because it says it can't find mod-wsgi, even though apt-get found it. If I carry on with the remaining steps, the Python app just shows as plain text in the browser. Any ideas what I have done wrong?

    EDIT: Results for the questions asked:

        automatedtester@ubuntu:~$ dpkg -l libapache2-mod-wsgi
        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
        ||/ Name                 Version  Description
        +++-====================-========-=====================================
        ii  libapache2-mod-wsgi  2.5-1    Python WSGI adapter module for Apache

        automatedtester@ubuntu:~$ dpkg -s libapache2-mod-wsgi
        Package: libapache2-mod-wsgi
        Status: install ok installed
        Priority: optional
        Section: python
        Installed-Size: 376
        Maintainer: Ubuntu MOTU Developers <[email protected]>
        Architecture: i386
        Source: mod-wsgi
        Version: 2.5-1
        Depends: apache2, apache2.2-common, libc6 (>= 2.4), libpython2.6 (>= 2.6), python (>= 2.5), python (<< 2.7)
        Suggests: apache2-mpm-worker | apache2-mpm-event
        Conffiles:
         /etc/apache2/mods-available/wsgi.load 06d2b4d2c95b28720f324bd650b7cbd6
         /etc/apache2/mods-available/wsgi.conf 408487581dfe024e8475d2fbf993a15c
        Description: Python WSGI adapter module for Apache
         The mod_wsgi adapter is an Apache module that provides a WSGI (Web Server
         Gateway Interface, a standard interface between web server software and
         web applications written in Python) compliant interface for hosting Python
         based web applications within Apache. The adapter provides significantly
         better performance than using existing WSGI adapters for mod_python or CGI.
        Original-Maintainer: Debian Python Modules Team <[email protected]>
        Homepage: http://www.modwsgi.org/

        automatedtester@ubuntu:~$ sudo a2enmod libapache2-mod-wsgi
        ERROR: Module libapache2-mod-wsgi does not exist!
        automatedtester@ubuntu:~$ sudo a2enmod mod-wsgi
        ERROR: Module mod-wsgi does not exist!
    FURTHER EDIT FOR RMyates:

        automatedtester@ubuntu:~$ apache2ctl -t -D DUMP_MODULES
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_worker_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgid_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         python_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK
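
    A hedged aside: on Debian/Ubuntu the module is enabled under the name "wsgi", matching the wsgi.load/wsgi.conf files in the dpkg output above, so the enable step would presumably be:

        sudo a2enmod wsgi
        sudo /etc/init.d/apache2 restart

    Consistent with that, wsgi_module is absent from the DUMP_MODULES listing above, which is what you would expect if the module was never actually enabled.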

    Read the article

  • Backup VM: Copy virtual disk xxx.vmdk - The virtual disk is either corrupted or not a supported format

    - by boiteavinc
    Hi, I'm French. I'm a student and I use ESXi 4 for my studies. When I try to back up my VMs with ghettoVCBg2.pl and the vSphere Management Assistant, I get the following error message in the vSphere Client: "Copy virtual disk xxx.vmdk - The virtual disk is either corrupted or not a supported format". I have no error in the VCB log:

        03-23-2010 08:31:56 -- debug: Main: Login by vi-fastpass to: esxi
        03-23-2010 08:31:56 -- debug: copyTask: Task START
        03-23-2010 08:31:56 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:32:00 -- info: Initiate backup for AD_DNS_DHCP found on esxi
        03-23-2010 08:32:09 -- debug: AD_DNS_DHCP original powerState: poweredOn
        03-23-2010 08:32:09 -- debug: Creating Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:33:19 -- info: AD_DNS_DHCP has 1 VMDK(s)
        03-23-2010 08:33:19 -- debug: backupVMDK: Backing up "Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk" to "Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23/$
        03-23-2010 08:33:19 -- debug: backupVMDK: Signal copyThread to start
        03-23-2010 08:33:19 -- debug: backupVMDK: Backup progress: Elapsed time 0 min
        03-23-2010 08:33:19 -- debug: copyTask: Wake up and follow the white rabbit, with status: doCopy
        03-23-2010 08:33:19 -- debug: CopyThread: Start backing up VMDK(s) ...
        03-23-2010 08:33:25 -- debug: copyTask: send copySuccess message ...
        03-23-2010 08:33:25 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:34:20 -- debug: backupVMDK: Successfully completed backup for Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk Elapsed time: 1 min
        03-23-2010 08:34:22 -- debug: Removing Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:34:24 -- debug: checkVMBackupRotation: Starting ...
        03-23-2010 08:34:26 -- debug: Purging Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23--1 due to rotation max
        03-23-2010 08:34:28 -- info: Backup completed for AD_DNS_DHCP!
        03-23-2010 08:34:28 -- debug: Main: Disconnect from: esxi
        03-23-2010 08:34:28 -- debug: Main: Calling final clean up
        03-23-2010 08:34:28 -- debug: cleanUP: Thread clean up starting ...
        03-23-2010 08:34:28 -- debug: cleanUp: Send exit to copyThread
        03-23-2010 08:34:28 -- debug: copyTask: Wake up and follow the white rabbit, with status: exit
        03-23-2010 08:34:28 -- debug: copyTask: die ...
        03-23-2010 08:34:28 -- debug: cleanUp: Join passed
        03-23-2010 08:34:28 -- info: ============================== ghettoVCBg2 LOG END ==============================

    My ghettoVCB conf file is:

        VM_BACKUP_DATASTORE = "Backup_VM"
        VM_BACKUP_DIRECTORY = "VM"
        VM_BACKUP_ROTATION_COUNT = "3"
        DISK_BACKUP_FORMAT = "thin"
        ADAPTER_FORMAT = "lsilogic"
        POWER_VM_DOWN_BEFORE_BACKUP = "0"
        VM_SNAPSHOT_MEMORY = "1"
        VM_SNAPSHOT_QUIESCE = "1"
        LOG_LEVEL = "info"
        VM_VMDK_FILES = "all"

    I tried several DISK_BACKUP_FORMAT values and get the same error. Despite the error, the files do appear on the NFS share. But when I try to open the vmx file with VMware Workstation, I get this error: "Cannot open the disk 'G:\Backup ESX\VM\AD_DNS_DHCP\AD_DNS_DHCP-2010-03-23--1\AD_DNS_DHCP.vmdk' or one of the snapshot disks it depends on. Reason: The called function cannot be performed on partial chains. Please open the parent virtual disk." I have no snapshots on my VM on ESXi. Can you help me?

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euro for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services, and Amazon doesn't charge for stopped instances. I was wondering: should I have known this beforehand, or is Microsoft doing a bad job of communicating its pricing model clearly?

    Quote from http://www.microsoft.com/windowsazure/pricing/ : "Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours."

    I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just stopped them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only needs a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for non-running instances, well, that's Azure's choice, not mine. I don't want to spread a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be more clear about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance!

    UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess it didn't reach a human being because I didn't hear anything back. So I made a telephone call today with a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.

    UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, so my credit card will first be charged, after which it will be refunded. But hey, that's no problem for me! I'm glad with the way it's been solved! Thanks everybody!

    Read the article

  • How to properly shut down a Linux VMware Server host

    - by Mikee
    Hi everybody: In our lab, we have a server running Ubuntu Linux 8.04.4 x64 with VMware Server 2.1 hosting 4 VMs. I have a major concern with regards to shutting down the host server: how do I ensure that the guest VMs are shut down safely?

    In the VMware web interface console, I have enabled "Allow virtual machines to start and stop automatically with the system". I set the Default Startup Delay to 15 seconds along with the "Start next VM immediately if the VMware Tools start" option, and the Default Shutdown Delay to 60 seconds with a Shutdown Action of "Shut Down Guest". All VMs have the VMware Tools installed and working properly. All VMs are moved up into the "Specified Order" section of "Startup Order", so when powering the server back on, all the VMs should start up again in that specified order.

    When I went to shut down the server, I used the shutdown -h now command. Based on the settings above, I was expecting a 4-minute shutdown, as each VM's shutdown can be delayed by up to 60 seconds. However, that is not what happened. Instead, the server shut down in under a minute. When I powered the server back on, only 2 VMs loaded properly. The other 2 showed the following error:

        "Power on Virtual Machine" failed to complete.
        If these problems persist, please contact your system administrator.
        Details: Cannot open the disk '[location of .vmdk]' or one of the snapshot disks it depends on.
        Reason: Failed to lock the file.

    Obviously, given this error, it is clear to me that the VMs were not properly shut down, or the server powered off before the VMs had completely shut down. I have fixed the above error by deleting the .lck files in the respective VM directories. How would I know whether the VMs were properly shut down? I checked the VMware server logs, but they only seem to show the activity of the vmware-mgmt service in the current session. I'm mostly running Linux VMs, so is there an easy way to know whether or not a guest was properly shut down? Thank you all for the help!
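
    One way to make the guest shutdowns explicit is to stop them from a script before powering off the host, rather than relying on the host's shutdown hooks. A rough sketch using vmrun; the VMware Server 2 connection flags, credentials, and port shown are assumptions to check against vmrun's own help on your install:

        #!/bin/sh
        # Ask each running guest for a clean, Tools-assisted shutdown.
        VMRUN="vmrun -T server -h https://localhost:8333/sdk -u admin -p secret"
        $VMRUN list | tail -n +2 | while read vmx; do
            $VMRUN stop "$vmx" soft   # "soft" requests a guest OS shutdown
        done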

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempting to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:

        Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/; the site has anonymous password protection with self-signed SSL)
        Mercurial 1.5.3
        Python 2.6.5
        Python for Windows 32 extensions 214 py2.6
        isapi-wsgi 0.4.2

    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy below). Other problems with the repository server: before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site. Also, I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host" (I will be asking another question for this problem). What can I do to start tracking down this problem?

    hgwebdir_wsgi.py:

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config:

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip ; Allows archive downloads.
        allow_push = ######## ; Users that are allowed to push.

    Read the article

  • Cisco ASA vs. SonicWall NSA firewall

    - by Lbaker101
    Currently I’m trying to structure our network to fully support and be redundant with BGP/Multi homing. Our current company size is 40 employees but the major part of that is our Development department. We are a software company and continued connection to the internet is a requirement as 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our exchange server. Right now i'm faced with 2 different directions and was wondering if I could get your opinions on this. We will have 2 ISPs that are both 20meg up/down and dedicated fiber (so 40megs combined). This is handed off as an Ethernet cable into our server room. ISP#1 first digital ISP#2 CenturyLink we currently have 2x ASA5505s but the 2nd one is not in use. It was there to be a failover and it just needs the security+ license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant and I found that we will either of the following. 2x Cisco 2921+ series routers with failover licenses. They will go in front of the ASAs and either connects in a failover state or 1 ISP into each of the 2921 series routers and then 1 line into each of the ASAs (thus all 4 hardware components will be used actively). So 2x Cisco 2921+ series routers 2x Cisco ASA5505 firewalls The other route 2x SonicWalls NSA2400MX series. 1 primary and the secondary will be in a failover state. This will remove the ASAs from the network and be about 2k cheaper than the cisco route. This also brings down the points of failure because it’s just the 2x sonicwalls It will also allow us to scale all the way up to 200-400 users (depending on their configuration). This also makes so the Sonic walls. So the real question is with the added functionality ect of the sonicwall is there a point in paying so much more to stay the cisco route? Thanks!

    Read the article

  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
    POSTMORTEM: Anticlimax: the only thing that died was my VPN (OpenVPN) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting ntp back up everywhere. If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1-second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem.

    Just today, Sat June 30th, starting soon after the start of the day GMT, we've had a handful of blades in different datacentres, as managed by different teams, all go dark: not responding to pings, screen blank. They're all running Debian Squeeze, with everything from the stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510, and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash I did get a screen dump from said:

        [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
        [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger, and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. I just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres, bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks or months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours, running 3.2.21.

    THE WORKAROUND: OK people, here's how I worked around it:

    1. Disabled ntp: /etc/init.d/ntp stop
    2. Created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco; see the blog posts in the comments)
    3. Ran fixtime.pl without an argument to see that there was a leap second set
    4. Ran fixtime.pl with an argument to remove the leap second

    NOTE: this depends on adjtimex. I've put a copy of the Squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex - it will run without dependencies on a Squeeze 64-bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously if you don't have Squeeze 64-bit... find your own. I'm going to start ntp again tomorrow.

    As an anonymous user suggested, an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leap-second counter.
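
    For reference, the "set the time yourself" alternative was typically done with a plain date round-trip, which re-sets the clock to its current value and, as a side effect, drops the queued leap second; worth testing on a non-critical box first:

        # GNU date: write the current time back, clearing the pending leap second
        date -s "$(date)"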

    Read the article

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their local Solaris 10 x86 server, pulled the power inputs, moved it, and now it won't start properly. It boots and then presents a prompt which lets you log in. This appears to be the single-user milestone (or equivalent). Digging into it, I think that SMF isn't permitting the system to go multi-user.

    SMF was generating a ton of errors on autofs; after some fooling with it I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed, deleted, or recreated, but I don't know what the actual issue is. By "generate errors", I mean that every second I get a message on the console saying "Method or service exit timed out. Killing contract <#." This makes interacting with the computer difficult.

    Running svcs -xv shows the service as "enabled", in state "disabled", reason "Start method is running". Fooling with svcadm on the service does nothing, except confirm that the service is not in a maintenance state. Logs in /lib/svc/log/$SERVICE just tell you that this loop has been happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted to start and immediately stopped, with no further entries. Note that system-log isn't starting because system-log depends on autofs, so I have no syslog or dmesg.

    Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (which was the dependency that SMF was using as an excuse to prevent nfs/client from "starting"; it was claiming that rpc/gss was "undefined", which is incorrect since this all used to work. I re-enabled it with inetadm, but inetd still won't start properly). But I think that the problem is SMF in general, not the individual services. Doing a restore_repository with the "manifest_import" backup does nothing to improve, or even detectably change, the situation. I didn't use a boot backup because the last boot(s) were not useful.

    I have told the customer that since the valuable data directories are on a separate file system (which fsck's as clean, so it is intact) we could just re-install Solaris 10 on the / partition. But that seems like an awfully Windows-like solution to inflict on this problem. So: any ideas what piece is broken and how I might fix it?

    Read the article

  • Some questions regarding Hostname

    - by user481913
    I just bought a new VPS hosting plan and I have a few questions. I hope someone here can clear up the doubts for me.

    1) Is it necessary to have a real domain for a VPS hostname? I suppose I can just use a non-real domain like anydomain.com and something like 'server' for the computer name, ending up with something like server.anydomain.com as the VPS's hostname. I want to do this for the sake of putting in a hostname to configure the VPS and get it going. So, since this non-real domain name does not need to be publicly accessible, I don't need to register or own it, and I can instead access the server by its IP address. Is that correct? But I suppose that this also depends on whether my web host allows it?

    2) I would also like to run some real sites with real domain names on this VPS, so can I just configure the zone file on the primary nameserver, make entries for these domains, and point an A record at the VPS's IP to make them publicly accessible over the internet? For example, for my 1st domain I could make an entry like this:

        $TTL 86400
        mydomain1.com. IN SOA ns1.mywebhost.com. \
            admin.mydomain1.com. (
                    2004011522 ; Serial no., based on date
                    21600      ; Refresh after 6 hours
                    3600       ; Retry after 1 hour
                    604800     ; Expire after 7 days
                    3600 )     ; Minimum TTL of 1 hour

        server                  IN A      200._._._
        ns1.mywebhost.com.      IN A      216._._._
        ns2.mywebhost.com.      IN A      205._._._
        @                       IN NS     ns1.mywebhost.com.
        @                       IN NS     ns2.mywebhost.com.
        @                       IN MX 10  server
        www                     IN CNAME  server
        server                  IN CNAME  @

    (So this last line tells the nameserver to point the URL mydomain1.com to server.anydomain.com at the particular IP address in the A record... is that right?) Similarly, for my 2nd domain I could have an equivalent entry for mydomain2.com, and so on. Is that correct?

    3) Suppose, for my VPS hostname, I ignorantly chose a domain that someone else already owns. I assume that it won't affect the public accessibility of the real domain or website, since only the real owner of the domain has the right to provide the nameserver addresses in the TLD registries through his domain registrar? Is that correct?

    4) Can I change my VPS's hostname later? Would this create any complications?

    Read the article

  • What's the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We run a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site.

    Our current and previous strategies have included using both DoubleTake and Red Gate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, but their support in South Africa is very poor; we had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Red Gate to manually read the data from the primary site (via queries) and write it to the DR site, but this is (a) a bad solution and (b) getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB-intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: zero loss; this is transactional data with financial impact, so we can't lose anything. RTO: tending to zero; the business depends on our ability to process transactions, and every minute we are down, we are losing money.

    I have looked at a few of the other questions/answers, but none meet our case exactly: "SQL Server 2008 failover strategy - Log shipping or replication?", "How to achieve the following RTO & RPO with logshipping only using SQL Server?", "What is the best of two approaches to achieve DB Replication?".

    My current thinking is that we should use mirroring, but I am concerned that for an RPO of zero we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to: 1. stop incoming traffic to the primary site and allow all in-flight transactions to complete; 2. allow the replication to DR to complete; 3. change network routing to route to the DR site; 4. start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?
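
    On the mirroring option: synchronous (high-safety) mode is what delivers RPO zero, because a commit does not complete until the log record is hardened on the mirror; that is also exactly the delayed-commit cost mentioned above, so it needs benchmarking over the actual WAN link. A minimal sketch of the partner setup, assuming mirroring endpoints already exist and using placeholder server names:

        -- on the mirror server, point at the principal
        ALTER DATABASE Payments SET PARTNER = 'TCP://principal.example.com:5022';
        -- on the principal, point at the mirror
        ALTER DATABASE Payments SET PARTNER = 'TCP://mirror.example.com:5022';
        -- high-safety (synchronous) mode: commits harden on both partners
        ALTER DATABASE Payments SET PARTNER SAFETY FULL;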

    Read the article
