Search Results

Search found 11930 results on 478 pages for 'shared machines'.

Page 10 of 478

  • Backing up Windows machines using rsync over SSH

    - by user38118
    We have a number of Windows XP / Windows 7 machines which need to be backed up nightly to a Linux file server. We would like to do it with rsync and rsnapshot, as that's what we're already familiar with from the rest of our Linux/FreeBSD machines. We tried DeltaCopy, but it proved troublesome: lots of problems getting it to log in via SSH automatically, and the Windows Scheduled Tasks seem to fail often. Is there a reliable way/application which can back up Windows machines via rsync to a r
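
    One approach worth considering (an assumption on my part, not something from the thread): keep the Windows side down to a cwRsync/Cygwin install with an SSH service, and drive the schedule from the Linux server so nothing depends on Windows Scheduled Tasks. A rough sketch of the nightly pull, with hypothetical host and path names:

        # Sketch only: assumes cwRsync or Cygwin provides rsync + sshd on the
        # Windows box and key-based SSH login is already set up for 'backup'.
        # Run nightly from cron on the Linux server (pull, not push), so the
        # scheduling stays on the more reliable Linux side.
        rsync -az --delete \
              -e "ssh -i /root/.ssh/backup_key" \
              backup@winxp01:/cygdrive/c/Users/ \
              /srv/backups/winxp01/latest/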

    Read the article

  • XML Schema: Namespace issues when importing shared elements

    - by netzwerg
    When trying to import shared definitions from an XML Schema, I can properly reference shared types, but referencing shared elements causes validation errors. This is the schema that imports the shared definitions (example.xsd):

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:shared="http://shared.com">
          <xs:import namespace="http://shared.com" schemaLocation="shared.xsd"/>
          <xs:element name="example">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="importedElement"/>
                <xs:element ref="importedType"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="importedElement">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="shared:fooElement"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="importedType">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="bar" type="shared:barType"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    These are the shared definitions (shared.xsd):

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns="http://shared.com"
                   targetNamespace="http://shared.com">
          <xs:element name="fooElement">
            <xs:simpleType>
              <xs:restriction base="xs:integer"/>
            </xs:simpleType>
          </xs:element>
          <xs:simpleType name="barType">
            <xs:restriction base="xs:integer"/>
          </xs:simpleType>
        </xs:schema>

    Now consider this XML instance:

        <?xml version="1.0"?>
        <example xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:noNamespaceSchemaLocation="example.xsd">
          <importedElement>
            <fooElement>42</fooElement>
          </importedElement>
          <importedType>
            <bar>42</bar>
          </importedType>
        </example>

    When validated, the "importedType" part works perfectly fine, but the "importedElement" part gives the following error:

        Invalid content was found starting with element 'fooElement'.
        One of '{"http://shared.com":fooElement}' is expected

    I would guess that my troubles are related to namespace issues (hence the somewhat misleading "got fooElement but was expecting fooElement") -- any hints on what's wrong here?
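
    A hedged reading of the error (not an answer given in this excerpt): fooElement is a global element declaration belonging to the http://shared.com namespace, so the instance has to qualify it, whereas shared:barType only borrows a type and the local bar element declared in example.xsd stays in no namespace, which is why that half validates. An instance along these lines should pass:

        <?xml version="1.0"?>
        <example xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:noNamespaceSchemaLocation="example.xsd">
          <importedElement>
            <!-- fooElement must carry the shared namespace -->
            <fooElement xmlns="http://shared.com">42</fooElement>
          </importedElement>
          <importedType>
            <bar>42</bar>
          </importedType>
        </example>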

    Read the article

  • Can I use boost::make_shared with a private constructor?

    - by Billy ONeal
    Consider the following:

        class DirectoryIterator;

        namespace detail {
            class FileDataProxy;

            class DirectoryIteratorImpl
            {
                friend class DirectoryIterator;
                friend class FileDataProxy;

                WIN32_FIND_DATAW currentData;
                HANDLE hFind;
                std::wstring root;

                DirectoryIteratorImpl();
                explicit DirectoryIteratorImpl(const std::wstring& pathSpec);
                void increment();
            public:
                ~DirectoryIteratorImpl() {};
            };

            class FileDataProxy // Serves as a proxy to the WIN32_FIND_DATA structure inside the iterator.
            {
                friend class DirectoryIterator;
                boost::shared_ptr<DirectoryIteratorImpl> iteratorSource;
                FileDataProxy(boost::shared_ptr<DirectoryIteratorImpl> parent) : iteratorSource(parent) {};
            public:
                std::wstring GetFolderPath() const {
                    return iteratorSource->root;
                }
            };
        }

        class DirectoryIterator : public boost::iterator_facade<
            DirectoryIterator,
            detail::FileDataProxy,
            std::input_iterator_tag>
        {
            friend class boost::iterator_core_access;
            boost::shared_ptr<detail::DirectoryIteratorImpl> impl;
            void increment() { impl->increment(); };
            detail::FileDataProxy dereference() const { return detail::FileDataProxy(impl); };
        public:
            DirectoryIterator() {
                impl = boost::make_shared<detail::DirectoryIteratorImpl>();
            };
        };

    It seems like DirectoryIterator should be able to call boost::make_shared<DirectoryIteratorImpl>, because it is a friend of DirectoryIteratorImpl. However, this code fails to compile because the constructor for DirectoryIteratorImpl is private. Since this class is an internal implementation detail that clients of DirectoryIterator should never touch, it would be nice if I could keep the constructor private. Is this a fundamental misunderstanding of make_shared on my part, or do I need to mark some sort of boost piece as a friend in order for the call to compile?
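
    For background (this describes how boost::make_shared behaves in general, not an answer from the excerpt): the construction happens inside boost::make_shared and its detail helpers, none of which are friends of DirectoryIteratorImpl, so friendship with DirectoryIterator does not help at the point of construction. One hedged workaround is to let the friend construct the object itself and wrap it, at the cost of make_shared's single allocation:

        // Sketch of a possible workaround, assuming the class layout above.
        // The private constructor stays private to everything except the
        // declared friends; only make_shared's one-allocation benefit is lost.
        DirectoryIterator::DirectoryIterator()
        {
            impl = boost::shared_ptr<detail::DirectoryIteratorImpl>(
                new detail::DirectoryIteratorImpl());
        }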

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails.

    Assume we have an application logic (a program) running on a computer and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated.

    What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with or break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare to say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G.

    What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle of A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.).

    What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange

    The two execution contexts (threads) on the computer may be multi-core, or just interrupt driven, or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off ... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off

    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command (example: suspected state = "red_or_green", command = "switch_to_blue"). Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object and try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.

    The com object could be separated of course (into two objects, one for each direction) but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out or hear an entirely different view on such a situation.
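
    To make the proposal concrete (the type and field names below are illustrative, not taken from the question), the com object might look roughly like this in C, with the transactional hand-over left to whatever primitive the RTOS provides:

        /* Sketch of the proposed com object, assuming C99 and some atomic or
         * critical-section mechanism for the ownership hand-over (not shown). */
        typedef enum { LED_RED = 1u << 0, LED_GREEN = 1u << 1,
                       LED_BLUE = 1u << 2, LED_OFF = 1u << 3 } led_state_t;

        typedef enum { CMD_TEST_IF_OFF, CMD_SWITCH_TO_RED,
                       CMD_SWITCH_TO_GREEN, CMD_SWITCH_TO_BLUE } command_t;

        typedef enum { OP_PENDING, OP_SUCCESS,
                       OP_WRONG_STATE, OP_LINK_BROKEN } op_status_t;

        typedef struct {
            unsigned    suspected_state; /* written by A: bitmask of led_state_t   */
            command_t   command;         /* written by A                           */
            op_status_t status;          /* written by PG                          */
            led_state_t new_state;       /* written by PG, valid once not PENDING  */
        } com_object_t;

        /* A fills in suspected_state and command, sets status = OP_PENDING and
         * hands the object to PG; PG talks to G over the serial link, then fills
         * in status and new_state; A takes the object back once status changes. */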

    Read the article

  • PHP shared extensions on Linux

    - by F21
    I am running Ubuntu Server 12.04 and prefer to compile PHP myself as opposed to installing it using apt-get. PHP is running as PHP-FPM. When compiling extensions, I can set it to be compiled as a shared extension using something like --with-bcmath=shared and so on. Are there any benefits to compiling the extensions as shared? I also noticed that the extensions are compiled into a pretty convoluted folder. On my system (my php prefix is /usr/local/php-5.4.9) the extensions end up in /usr/local/php-5.4.9/lib/php/extensions/no-debug-non-zts-20100525. Is there a global way to set a folder so that all shared extensions will be compiled in there? I understand that I can do something like --with-foobar=shared,/usr/local/foobar/ but having to set the extension folder for each shared extension is inefficient and error-prone.
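
    Not from the excerpt, and worth verifying against the build documentation for your PHP version: the extension_dir directive in php.ini decides where PHP-FPM loads shared .so files from at runtime, and PHP's configure script honours an EXTENSION_DIR environment variable, which avoids repeating a path per extension. A sketch:

        # Build time (sketch): pick one extension directory for everything
        EXTENSION_DIR=/usr/local/php-5.4.9/lib/php/extensions \
            ./configure --prefix=/usr/local/php-5.4.9 --with-bcmath=shared

        ; Run time (php.ini): tell PHP-FPM where the shared extensions live
        extension_dir = "/usr/local/php-5.4.9/lib/php/extensions"
        extension = bcmath.so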

    Read the article

  • Windows 7 shared folder status info not showing in details pane

    - by Mike
    Just upgraded my system to Windows 7 a few hours ago and noticed that there was no shared overlay icon on my shared folders anymore. After doing some reading I discovered that the icon was removed in Windows 7 -- stupid decision, but whatever. Anyway, I have been reading that if a folder is shared you can still see its shared status in the details pane in Explorer whenever you select the folder, but this information is not available to me for some reason and I am not sure why. When I click on a shared folder in Explorer, the only information I see in the details pane is the folder's name and the modified date/time, which is the same information I see for a folder that is not shared. So my question is: how do I get this information to display in the details pane like it's supposed to?

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

    Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

    First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough.

    (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

    To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps.

    Step 1: Determine if this server is hosting the primary replica.

    This is a TSQL step using this script:

        declare @agName sysname = 'AGTest'
        set nocount on
        declare @primaryReplica sysname

        select @primaryReplica = agState.primary_replica
        from sys.dm_hadr_availability_group_states agState
           join sys.availability_groups ag on agstate.group_id = ag.group_id
        where ag.name = @AGname

        if not exists(
           select *
           from sys.dm_hadr_availability_group_states agState
           join sys.availability_groups ag on agstate.group_id = ag.group_id
           where @@Servername = agstate.primary_replica
           and ag.name = @AGname)
        begin
           raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@Agname, @@Servername, @primaryReplica)
        end

    This script determines if the primary replica value of the AG group is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @AgName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica, otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Job Step 2: Update the CNAME entry in DNS with this server's name.

    I used a PowerShell script to accomplish this:

        $cname = "SPSQL.contoso.com"
        $query = "Select * from MicrosoftDNS_CNAMEType"
        $dns1 = "dc01.contoso.com"
        $dns2 = "dc02.contoso.com"
        if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns1
        }
        elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns2
        }
        else
        {
            $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
            Throw $msg
        }
        $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
        $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
        $currentServer = $record.RecordData
        if ($currentServer -eq $thisServer )
        {
            $cname + " CNAME is up to date: " + $currentServer
        }
        else
        {
            $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
            $record.RecordData = $thisServer
            $record.put()
        }

    This script does a few things:
    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks if the CNAME record is correct and updates it if necessary

    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account.

    When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership).

    In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent --> Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.

    I created this two step Job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server hosting the primary replica runs the job, and when the secondary server runs it, the job history looks as shown in the screenshots in the original article.

    When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias.

    Using a DNS CNAME and a SQL Agent Job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).

    Read the article

  • Condition Variable in Shared Memory - is this code POSIX-conformant?

    - by GrahamS
    We've been trying to use a mutex and condition variable to synchronise access to named shared memory on a LynuxWorks LynxOS-SE system (POSIX-conformant). One shared memory block is called "/sync" and contains the mutex and condition variable, the other is "/data" and contains the actual data we are syncing access to. We're seeing failures from pthread_cond_signal() if both processes don't perform the mmap() calls in exactly the same order, or if one process mmaps in some other piece of shared memory before it mmaps the sync memory. This example code is about as short as I can make it:

        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/mman.h>
        #include <sys/file.h>
        #include <stdlib.h>
        #include <pthread.h>
        #include <errno.h>
        #include <iostream>
        #include <string>

        using namespace std;

        static const string shm_name_sync("/sync");
        static const string shm_name_data("/data");

        struct shared_memory_sync
        {
            pthread_mutex_t mutex;
            pthread_cond_t condition;
        };

        struct shared_memory_data
        {
            int a;
            int b;
        };

        //Create 2 shared memory objects
        // - sync contains 2 shared synchronisation objects (mutex and condition)
        // - data not important
        void create()
        {
            // Create and map 'sync' shared memory
            int fd_sync = shm_open(shm_name_sync.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR);
            ftruncate(fd_sync, sizeof(shared_memory_sync));
            void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0);
            shared_memory_sync* p_sync = static_cast<shared_memory_sync*>(addr_sync);

            // init the cond and mutex
            pthread_condattr_t cond_attr;
            pthread_condattr_init(&cond_attr);
            pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&(p_sync->condition), &cond_attr);
            pthread_condattr_destroy(&cond_attr);

            pthread_mutexattr_t m_attr;
            pthread_mutexattr_init(&m_attr);
            pthread_mutexattr_setpshared(&m_attr, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&(p_sync->mutex), &m_attr);
            pthread_mutexattr_destroy(&m_attr);

            // Create the 'data' shared memory
            int fd_data = shm_open(shm_name_data.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR);
            ftruncate(fd_data, sizeof(shared_memory_data));
            void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0);
            shared_memory_data* p_data = static_cast<shared_memory_data*>(addr_data);

            // Run the second process while it sleeps here.
            sleep(10);

            int res = pthread_cond_signal(&(p_sync->condition));
            assert(res==0); // <--- !!!THIS ASSERT WILL FAIL ON LYNXOS!!!

            munmap(addr_sync, sizeof(shared_memory_sync));
            shm_unlink(shm_name_sync.c_str());
            munmap(addr_data, sizeof(shared_memory_data));
            shm_unlink(shm_name_data.c_str());
        }

        //Open the same 2 shared memory objects but in reverse order
        // - data
        // - sync
        void open()
        {
            sleep(2);

            int fd_data = shm_open(shm_name_data.c_str(), O_RDWR, S_IRUSR|S_IWUSR);
            void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0);
            shared_memory_data* p_data = static_cast<shared_memory_data*>(addr_data);

            int fd_sync = shm_open(shm_name_sync.c_str(), O_RDWR, S_IRUSR|S_IWUSR);
            void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0);
            shared_memory_sync* p_sync = static_cast<shared_memory_sync*>(addr_sync);

            // Wait on the condvar
            pthread_mutex_lock(&(p_sync->mutex));
            pthread_cond_wait(&(p_sync->condition), &(p_sync->mutex));
            pthread_mutex_unlock(&(p_sync->mutex));

            munmap(addr_sync, sizeof(shared_memory_sync));
            munmap(addr_data, sizeof(shared_memory_data));
        }

        int main(int argc, char** argv)
        {
            if(argc>1)
            {
                open();
            }
            else
            {
                create();
            }
            return (0);
        }

    Run this program with no args, then another copy with args, and the first one will fail at the assert checking the pthread_cond_signal(). But change the open() function to mmap() the "/sync" memory first and it will all work fine. This seems like a major bug in LynxOS, but LynuxWorks claim that using mutex and condition variable in this way is not covered by the POSIX standard, so they are not interested. Can anyone determine if this code does violate POSIX? Or does anyone have any convincing documentation that it is POSIX compliant?

    Read the article

  • .net compliant version control system that can be installed on a shared hosting (with no SSH/root Access)

    - by Farshid
    I searched a lot on SO and other websites for a version control system that can be installed on shared Windows hosting, letting me create repositories for my project files and giving me version control facilities, but I did not find one. I looked at whether I can install git, Mercurial or TFS on shared hosting and did not find any answer. I want to know if you know of any system that can be installed on shared Windows hosting; please share your recommendations if you have experience with this.

    Read the article

  • Shared memory of same DLL in different 32 bit processes is sometimes different in a terminal session

    - by KBrusing
    We have a 32-bit application consisting of several processes. They communicate via shared memory in a DLL used by every process. The shared memory is built from global variables in C++ by means of "#pragma data_seg ("Shared")". When running this application, sometimes while starting a new process in addition to an existing (first) process, we observe that the shared memory of the two processes is not the same: all newly started processes cannot communicate with the first process. After stopping all of our processes and restarting the application (with some processes), everything works fine. But sooner or later, after successfully starting and finishing new processes, the problem occurs again. Running on all other Windows versions, or in terminal sessions on Windows Server 2003, our application never had this problem. Is there any new "feature" in Windows Server 2008 that might disturb the harmony of our application?
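
    For readers unfamiliar with the technique the poster describes, this is the general MSVC pattern (illustrative, not the poster's code): a named, initialized data segment plus a /SECTION linker directive marks the variables as shared across every process that loads the DLL.

        // General MSVC shared-data-segment pattern in a DLL (illustrative only).
        // Variables must be explicitly initialized, otherwise the compiler puts
        // them in the default uninitialized-data section instead of "Shared".
        #pragma data_seg("Shared")
        volatile long g_sharedCounter = 0;
        #pragma data_seg()

        // Mark the section read/write/shared for all processes loading the DLL.
        #pragma comment(linker, "/SECTION:Shared,RWS")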

    Read the article

  • "Shared Folders" Feature Is Not Working In VirtualBox

    - by Islam Hassan
    I have Ubuntu 11.10 as a host and another Linux 2.6 distribution as a guest. When I try to set up guest additions, this error message appears:

        Building the shared folder support module .. fail

    And because of that, when I run the following in the terminal

        mount -t vboxsf shared /root/shared

    I get the following error message:

        mount: unknown filesystem type 'vboxsf'

    Any suggestions please?

    EDIT: Sorry, the mentioned error message isn't complete; this is it:

        Building the shared folder support module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)

    This is the content of vboxadd-install.log:

        Uninstalling modules from DKMS
        Attempting to install using DKMS
        Creating symlink /var/lib/dkms/vboxguest/4.1.2/source -> /usr/src/vboxguest-4.1.2
        DKMS: add Completed.
        Kernel preparation unnecessary for this kernel. Skipping...
        Building module:
        cleaning build area....
        make KERNELRELEASE=3.2.6 -C /lib/modules/3.2.6/build M=/var/lib/dkms/vboxguest/4.1.2/build..........................(bad exit status: 2)
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory /var/lib/dkms/vboxguest/4.1.2/build/ for more information.
        0
        0
        ERROR: binary package for vboxguest: 4.1.2 not found
        Failed to install using DKMS, attempting to install without
        make KBUILD_VERBOSE=1 -C /lib/modules/3.2.6/build SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 modules
        test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \
        echo; \
        echo "  ERROR: Kernel configuration is invalid."; \
        echo "         include/generated/autoconf.h or include/config/auto.conf are missing.";\
        echo "         Run 'make oldconfig && make prepare' on kernel src to fix it."; \
        echo; \
        /bin/false)
        mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/*
        WARNING: Symbol version dump /usr/src/linux-source-3.2.6/Module.symvers is missing; modules will have no dependencies and modversions.

    Actually the log file is very large and it exceeds the 30000 characters limit. How can I upload the entire log file here?

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title.

    Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer.

    A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared.

    So, the question is, what is the best way to access this shared data? I have a lot of options since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case.

    Options:

    1. Create a local "reference" (term used lightly) to the shared data (data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

           var schedule = function(){
               var sched = Schedule();
               var t1 = Term( function(x){ // Term.print()
                   return (x + sched.data).format();
               });
           };

    2. Bind it to Term explicitly (pass it in Term's constructor or something), or bind it in Sched after Box passes it, and then access it as an attribute of Term.
    3. Pass it in at the same time x is passed to the print function (from sched). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.
    4. Do something weird like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.

    Read the article

  • Is sqlite3 faster than MySQL on shared hosting?

    - by Osvaldo
    Can sqlite3 be faster than MySQL on shared hosting for small to average websites (less than 500 visitors a day)? I have an account with a popular shared hosting provider and I've noticed that it has become quite slow rendering pages. My suspicion is that this may happen because the MySQL server is overloaded. Some CMSes work fine with SQLite too, so I was wondering if I should use SQLite for the new sites instead of MySQL.

    Read the article

  • Integrating Simple Machines Forum in my custom membership site - login not working?

    - by Ali
    Hi guys, I'm trying to integrate Simple Machines Forum into my custom-made membership website. I'm using the SMF API which is available at http://download.simplemachines.org/?tools. The thing is that it's working on my localhost testing server; however, on the online server where the system is hosted, it's not working. I have set it up so that when the user logs into my own custom CMS membership site, he/she is logged in automatically to his/her corresponding account on the forum. It works on my localhost, but online it's not working at all: I log into my site and then browse to the forum, only to find I haven't been logged in there :( I think it's not creating the cookies or registering the session. Where should I look here? Please do help.

    Read the article

  • Do Ruby/Rails state machines exist that execute event transitions when a state change occurs?

    - by Bryan
    Hello All, Hopefully this isn't a silly question and I'm not just overlooking something in Ruby/Rails state machines (AASM, Transitions, AlterEgo, etc). From what I can tell, these state machine implementations operate on the premise that an event will get fired and the appropriate transition for that event will be triggered based on the old and new state. However, they don't seem to work the other way: say a user wants to change state from 'created' to 'assigned' and have the correct transition occur (rather than firing the event that causes the current state to be transitioned to the new state). Essentially, I want the user to be able to select a new state from a select box of available states and have the appropriate transition, guards, success callbacks, etc. executed. Does anyone know if the existing state machine implementations support this?

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn't supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise.

    Note: Azure Connect is currently in Technical Preview.

    About Azure Connect

    Let's do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn't provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later.

    You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to:

    - Connect machines located in different cloud providers
    - Connect on-premise machines running in different locations
    - Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported)
    - Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers

    The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal.

    [Diagram: High Level Network Topology With Azure Connect]

    Azure Connect Agent

    Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that's because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed. To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent.

    Activating Roles for Azure Connect

    As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx.

    Firewall Rules

    Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx.

    CA Certificates

    You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more.

    Groups

    To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups, allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group.

    [Screenshot: Groups and Computers in Azure Connect]

    As I mentioned previously, you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication.

    [Screenshot: Defining a Group in Azure Connect]

    Testing the Connection

    Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests.

    [Screenshot: Sending a PING Request Through The Relay]

    Conclusion

    As you can see, creating a network topology between machines using the Azure Connect service is simple. It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx.

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Special Thanks

    I would like to thank those who helped me figure out how Azure Connect works:
    - Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/
    - Michael Wood - Http://www.mvwood.com
    - Glenn Block - http://www.codebetter.com/glennblock
    - Yves Goeleven - http://cloudshaper.wordpress.com/
    - Sandrino Di Mattia - http://fabriccontroller.net/
    - Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a proxmox HA configuration with two Dell R710 machines (dual 6 core processors in each) with enterprise level drive raid arrays. I would be using DRBD with a quorum disk on a third machine. I would dedicate two 1GB nics on each server to the DRBD communications. We would have approximately 12 to 14 Virtual Machines running on this pair of servers. The proxmox manual recommends creating two DRBD resources - one for the Virtual Machines that normally run on ServerA and one for the Virtual Machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.

    Read the article

  • How do I send a newsletter to 2000 emails on a shared server?

    - by Bogdan
    Consider this scenario:
    - I'm running Magento 1.4 Community Edition
    - I have a newsletter that I'm sending out 2 times per week
    - My list has 2000 subscribers
    - I'm on a shared hosting plan (Linux on Apache 2.2 with cPanel)
    - My hosting provider limits me to 6 emails/minute

    How can I send 2000 emails x 2 times/week x 4 weeks/month in the above scenario? Is there a server configuration I can ask my hosting company to make? Any software that can throttle and stagger the email sending?
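
    To put the limit in perspective (simple arithmetic, not from the question): at 6 emails per minute, a 2000-recipient send takes 2000 / 6, which is roughly 334 minutes, or about 5.5 hours; two sends per week therefore means around 11 hours of continuous, throttled sending.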

    Read the article

  • Shared Database Servers

    - by shivanshu.upadhyay
    As more enterprises consolidate their database environments to support private cloud initiatives, ISVs will have to deal with scenarios where they need to run on a shared, powerful database server like Exadata. Some ISVs are concerned about meeting SLAs for performance in a shared environment. Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLAs. These capabilities are:

    1) Instance Caging - This guarantees the CPU allocated, or limits the maximum number of CPUs (and so the number of Oracle processes) that an instance of the database can use simultaneously. With this feature, ISVs can be assured that their application is allocated adequate CPUs even if the database server is shared with other applications. (A generic example follows after this list.)

    2) CPU Resource Allocation with Database Resource Manager - This allocates percentages of CPU time to different users and applications within a database. ISVs can use this feature to ensure that priority users or workloads within their application get CPU resources over other requirements.

    3) Exadata I/O Resource Manager - The Database Resource Manager feature in Oracle Database 11g has been enhanced for use with Exadata. This allows the sharing of storage between databases without fear of one database monopolizing the I/O bandwidth and impacting the performance of the other databases sharing the storage. This can be used to ensure that I/O does not become a performance bottleneck due to poor design of other applications sharing the same server.
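
    As a generic illustration of point 1 (not taken from the post; check the Oracle documentation for your release): instance caging is enabled by capping CPU_COUNT and activating a resource manager plan on the instance.

        -- Sketch: cage this instance to 4 CPUs
        ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;
        -- Instance caging only takes effect while a resource manager plan is active
        ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;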

    Read the article

  • How to fix Java problem installing Matlab 2012a (64-bit) in Ubuntu 12.04 (64 bit)?

    - by Sabyasachi
    I am trying to install Matlab 2012a (64-bit) in Ubuntu 12.04 LTS (64-bit). I have installed Java 7. My Java version is:

        sabyasachi@sabyasachi-ubuntu:~/Downloads/R2012a_UNIX$ java -version
        java version "1.7.0_05"
        Java(TM) SE Runtime Environment (build 1.7.0_05-b05)
        Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

    I am getting the following error while installing Matlab:

        sabyasachi@sabyasachi-ubuntu:~/Downloads/R2012a_UNIX$ ./install
        Preparing installation files ...
        Installing ...
        /tmp/mathworks_18824/sys/java/jre/glnxa64/jre/bin/java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory
        Finished

    How can I fix this problem? When I use the -v (verbose) option I am getting the following:

        sabyasachi@sabyasachi-ubuntu:~/Downloads/R2012a_UNIX$ sudo ./install -v
        Preparing installation files ...
        -> DVD             = /home/sabyasachi/Downloads/R2012a_UNIX
        -> ARCH            = glnxa64
        -> DISPLAY         = :0
        -> TESTONLY        = 0
        -> JRE_LOC         = /tmp/mathworks_26521/sys/java/jre/glnxa64/jre
        -> LD_LIBRARY_PATH = /tmp/mathworks_26521/bin/glnxa64
        Command to run:
        /tmp/mathworks_26521/sys/java/jre/glnxa64/jre/bin/java -splash:"/home/sabyasachi/Downloads/R2012a_UNIX/java/splash.png" -Djava.ext.dirs=/tmp/mathworks_26521/sys/java/jre/glnxa64/jre/lib/ext:/tmp/mathworks_26521/java/jar:/tmp/mathworks_26521/java/jarext:/tmp/mathworks_26521/java/jarext/axis2/:/tmp/mathworks_26521/java/jarext/guice/:/tmp/mathworks_26521/java/jarext/webservices/ com/mathworks/professionalinstaller/Launcher -root "/home/sabyasachi/Downloads/R2012a_UNIX" -tmpdir "/tmp/mathworks_26521"
        Installing ...
        /tmp/mathworks_26521/sys/java/jre/glnxa64/jre/bin/java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory
        Finished

    Read the article

  • SCCM Report to identify machines with 64-bit capable hardware

    - by GAThrawn
    Currently looking at deployment options for Windows 7. One of the questions we're looking into is 32-bit vs 64-bit. I'm trying to run a SCCM report against our estate to identify which machines are 64-bit capable (whether or not they're currently running a 64-bit OS). There seem to be a few resources out on the net for this (here, here and here) but none of them seem to work right on machines running 32-bit Windows XP; 32-bit XP machines seem to always report that they're running on 32-bit hardware. The query I'm currently running is:

        select sys.netbios_name0,
               sys.Operating_System_Name_and0 as OperatingSystem,
               case when pr.addresswidth0 = 64 then '64bit OS'
                    when pr.addresswidth0 = 32 then '32bit OS'
               end as [Operating System Type],
               case when pr.DataWidth0 = 64 then '64bit Processor'
                    when pr.DataWidth0 = 32 then '32bit Processor'
               end as [Processor Type],
               case when pr.addresswidth0 = 32 and pr.DataWidth0 = 64 then 'YES'
               end as [32-bit OS on x64 processor]
        from v_r_system sys
        join v_gs_processor pr on sys.resourceid = pr.resourceid

    I've also tried this, which reports that all "Windows XP Professional" systems are on "X86-based PC", not x64-based, even though a number of them definitely are:

        select OS.Caption0, CS.SystemType0, Count(*)
        from dbo.v_GS_COMPUTER_SYSTEM CS
        Left Outer Join dbo.v_GS_OPERATING_SYSTEM OS on CS.ResourceID = OS.ResourceId
        Group by OS.Caption0, CS.SystemType0
        Order by OS.Caption0, CS.SystemType0

    For instance, we have a set of Dell Latitude E4200 laptops. Some of these are running 32-bit Windows XP SP3, some of them are running 32-bit Windows 7, some are running 64-bit Windows 7. All the laptops are identical, having come from the same order. Out of these, the Windows 7 (32 and 64-bit) machines report that the hardware is 64-bit capable, and the Windows XP machines report that they're only 32-bit capable. Does anyone know if there's another value I can query to get the hardware's capabilities correctly on XP, or is there a hotfix that will get it reporting the correct info?

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements

    - Allow Remote Desktop Connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running:
      - DNS Client
      - Function Discovery Resource Publication
      - SSDP Discovery
      - UPnP Device Host

    My Environment

    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I've found this setup to work well except when it comes to the display window for my VMs.

    The Issue

    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today when I decided to fully investigate why I couldn't connect via RDP. First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let's dig into the requirements above.

    Allow RDP Connection

    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from "Don't allow..." to whichever "Allow connections..." setting suits your needs. I chose the less secure option as this is just my dev laptop.

    Network Adapter Type

    When I originally configured my VMs I configured each to use 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs. The connection for the ethernet adapter is an "External" adapter and thus doesn't connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, "Internal", which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs.

    Turn On Network Discovery

    Seems like a pretty common sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left. On the following screen select "Turn on network discovery" for the currently used profile and click Save Settings. You may notice though that your selection to turn on network discovery doesn't save. If this is the case then you most likely don't have the supporting services running (as was my case).

    Network Discovery Supporting Services

    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here). In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the Sharing settings screen and found that Network Discovery was turned on. I'm not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on. Either way, the end result was a success.

    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host

    Before and After Results

    The first image is the smaller, square-shaped viewing window used by the Hyper-V native connection. The second is the full-screen RDP connection in all its widescreen glory.

    Conclusion

    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn't been able to get right for the longest time. Now that I resolved these issues I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off feel free to let me know.

    -Frog Out

    Links

    Turning on Network Discovery
    http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx

    Services required for Network Discovery
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
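
    If you would rather script the four supporting services listed above than set them in the Services console, something along these lines should work from an elevated command prompt on the guest (the short service names are the standard ones for these services, but verify them on your build):

        REM Enable and start the services that Network Discovery depends on
        sc config Dnscache start= auto
        sc config FDResPub start= auto
        sc config SSDPSRV start= auto
        sc config upnphost start= auto
        net start Dnscache
        net start FDResPub
        net start SSDPSRV
        net start upnphost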

    Read the article

  • Thoughts on Development using Virtual Machines

    - by J_A_X
    I'll be working as a development lead for a startup and I've suggested that we use VMs for development. I'm not talking about each developer having a desktop with VMs for testing/development; I mean having a server rack where all VMs are managed, with the developers working from a micro PC (ChromeOS anyone?) locally, or even remotely from their home computers. To me, the benefits are that it's extremely scalable, cheaper in the long run, easier to manage, and that we utilize the hardware to its maximum potential. As for cons, I can't think of any particular showstoppers other than needing someone to set up and maintain said setup. I was hoping that some of you might have had a similar setup at your place of employment and would be able to weigh in with your opinions. Thanks.

    Read the article
