Search Results

Search found 11930 results on 478 pages for 'shared machines'.


  • No video playback when using home sharing on iTunes for Windows

    - by Diago
    I recently configured iTunes Home Sharing on my home network. My WHS server is the central library, sharing both my music and video files. All my videos are encoded in H.264/MP4 format. All my machines are authorized and have access to Home Sharing, and all are running the latest version of iTunes 9. On my Snow Leopard machines iTunes happily plays the videos and music with no issues. On the Windows 7 machines, however, the play button fades for about 2 seconds when I select a shared video, and then nothing happens. The Windows machines are running the latest Community Codec Pack. Music sharing works perfectly on the Windows machines. When accessing the videos through the native WHS Media Connect sharing, as well as through the file share, I can play them perfectly on the Windows 7 machines. When I add the file to the iTunes library on the Windows 7 machine, it also plays perfectly. Any advice, ideas or suggestions as to how to play the videos I have shared through Home Sharing on Windows 7?

    Read the article

  • Setting up a Pagefile and Partition in Server 2008

    - by Brett Powell
    I am setting up 18 new machines for our company, and I have instructions from my new boss on setting up a pagefile and partitions. I have looked at their existing machines to base the new setups on, but there is no consistency between any 2 machines, which has left me extremely frustrated to say the least. My instructions are: 1) Set a static pagefile (use the recommended value as max/min); put it on the SSD if an SSD is available. 2) Make 3 partitions: C: is used for the OS and install files; D: is used for backups on machines with an SSD (on machines without an SSD, create a D: partition for the pagefile, sized at 2x installed RAM); E: must be the partition hosting user files. I have never messed with pagefiles before, and looking at their existing machines is offering no help. My questions are: 1) As the machines I am setting up have no SSD (just 2 SATA drives), does it sound like the pagefile should be set up on the C: (primary) drive or the D:? The instructions are vague, so I have no idea. 2) As C: and D: are both physical drives, does it sound like C: should be partitioned to create the E: drive, or D:? Thanks for any help I can get. I am extremely stressed out under a massive workload right now, and these vague instructions are quite infuriating.
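    A sketch of how the static-pagefile step could be scripted on Server 2008 from an elevated command prompt, assuming WMIC is available; the 4096 MB figure is a stand-in for whatever "recommended" value the machine reports, and a reboot is needed to apply the change:

        rem Turn off automatic pagefile management
        wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False
        rem Move the pagefile to D: with equal min/max so it is static (size assumed)
        wmic pagefileset where name="C:\\pagefile.sys" delete
        wmic pagefileset create name="D:\\pagefile.sys"
        wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096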

    Read the article

  • nxclient crashes when trying to open a terminal from a remote client through "ssh -Y"

    - by user167328
    I support around 150 Linux machines. I have 2 virtual machines on an ESXi server which I access via nxmachine v3 from a Windows 7 box. These machines run CentOS 5 with KDE and Lubuntu 12.04.1, and they are the admin GUIs from which I support the 150 machines. The Linux machines which I manage are Red Hat 4/5, CentOS 5, and Ubuntu 10 and 12. Normally I contact the machines via ssh -Y. Today I did an ssh -Y to a remote machine running Ubuntu 12.10 and ssh 6.0p1. Then I tried to open an lxterminal on the remote machine, which should display on my KDE desktop. This immediately and reproducibly crashed my nxclient session. I tried again from my Lubuntu system with the same effect. I have not observed the phenomenon from other machines yet. The message log on my KDE host shows: Unexpected termination of nxagent because of signal: 11 Logger::log nxnode 3920. Googling for this revealed no usable answer. Does anybody have a clue what is going on here, or can give a hint on how to solve the issue? Add-on: I asked the user at the remote machine to export his DISPLAY to my host and open an lxterminal. This worked without problems, i.e. the nxclient did not crash. Then the user tried to send me xeyes, and this also killed the nxclient with the same error message in the message log as above. This makes me suspect that the problem is not solely connected to ssh but maybe to some library stuff.

    Read the article

  • Setting up squid proxy server to in turn connect using another proxy server [closed]

    - by AnkurVj
    My institute uses the Squid proxy server, and its authentication mechanism requires a username and password to be entered. This means that I can log in on only one machine at a time, and Internet access for me is restricted to that machine. I sometimes require Internet access on multiple machines simultaneously. What previously worked for me was the following: On one of my own machines, A, I set up a Squid proxy server that allowed all local machines without any username and password. I configured the rest of the machines to use machine A as their proxy server. On machine A, I logged into the institute proxy server using my browser. This way, I could access the Internet from all my machines by effectively channeling my requests through server A. Recently, I lost that machine and its configuration, and now I have tried to set it up again in the same manner. However, I can't seem to remember exactly how I made it work. I keep getting Connection Refused (111) on the other machines. My guess is that my Squid server isn't able to forward requests from the other machines to the actual institute proxy. I could use any help in debugging this problem. I don't want to use alternatives such as ssh tunneling. This solution has worked for me in the past; I just don't remember how to set it up the same way again.
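    For reference, a minimal squid.conf sketch of the chaining setup described here; the parent hostname, ports, LAN range, and credentials are all assumptions:

        # Listen for the other machines on the default Squid port
        http_port 3128
        # Allow the local LAN without authentication (range is an assumption)
        acl localnet src 192.168.1.0/24
        http_access allow localnet
        http_access deny all
        # Chain every request to the institute proxy (host/port/credentials assumed)
        cache_peer proxy.institute.example parent 8080 0 no-query default login=myuser:mypass
        never_direct allow all

    Connection Refused (111) on the client side usually means nothing is listening or reachable at all, so checking that Squid on A is running, bound to the right http_port, and not blocked by a local firewall is the cheaper first test.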

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, add them to versioning, then commit the changes. I then clone the repository to my local hard drive. Next I make a change in my local repository, commit it, then push back to the shared-filesystem repository. The shared repo graph I get in TortoiseHg looks strange to me (screenshots of the shared and local repo graphs omitted). On the shared repo, the working directory always shows up at the top, then the graph goes 'down' to rev 0 and back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository I can back out the tip revision with no problem. On the shared-filesystem repository I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?
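    A few stock Mercurial commands can make the difference between the two repositories visible from the command line - a sketch, to be run in each repository:

        hg log -G -l 10   # ASCII graph of recent history (needs the graphlog extension on older versions)
        hg heads          # list all heads; more than one hints at a divergent line
        hg parents        # show the revision the working directory is based on
        hg update default # move the working directory to the branch tip

    One plausible reading of the symptoms, offered as a hypothesis: the shared repository's working directory is still parked on an old revision (typical for a push target that nobody ever updates), which TortoiseHg draws as a dangling "not a head revision" entry, and a backout attempted from that stale parent can fail even though both repositories contain identical changesets.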

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: perform an inventory; view the Windows Azure VM Readiness results and report; collect performance data to determine VM sizing; and view the Windows Azure Capacity results and report.

    Perform an inventory:
    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.
    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.
    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. The discovery methods are:
    - Use Active Directory Domain Services: query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    - Windows networking protocols: uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only this option to find computers.
    - System Center Configuration Manager (SCCM): inventory computers managed by SCCM. You need to provide credentials for the SCCM server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then connect to those computers.
    - Scan an IP address range: specify the starting and ending addresses of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: this option can perform poorly if many IP addresses within the range aren't in use.
    - Manually enter computer names and credentials: use this method if you want to inventory a small number of specific computers.
    - Import computer names from a file: create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it does need to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also configure your firewall to allow remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally, just accept the defaults and click Next.
    6. On the Enter Computers Manually page, click Create to bring up a dialog in which to enter one or more computer names.
    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start.

    Windows Azure Readiness results and report: After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click the Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to find the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve the issue if a component is not supported.

    Collect performance data: Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report: After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references: Windows Azure Homepage; How-to guides for Windows Azure Virtual Machines; Provisioning a SQL Server Virtual Machine on Windows Azure; Windows Azure Pricing.

    Peter Saddow, Senior Program Manager – MAP Toolkit Team
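    As a side note on the WMI requirement in step 4: one way to open the predefined WMI rule group from an elevated command prompt is sketched below; the rule group name can vary with OS language and version.

        rem Enable the built-in WMI rule group in Windows Firewall
        netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes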

    Read the article

  • Slicing the EDG

    - by Antony Reynolds
    Different SOA Domain Configurations

    In this blog entry I would like to introduce three different configurations for a SOA environment. I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion. For each possible deployment architecture I have identified some of the advantages.

    Super Domain
    This is a single EDG-style domain for everything needed for SOA/OSB. It extends the standard EDG slightly but otherwise assumes a single "super" domain. This is basically the SOA EDG; I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.
    Key points:
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
    Benefits:
    - Single administration point (one Admin Server).
    - Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    Drawbacks:
    - Patching is an all-or-nothing affair.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Multiple Domains
    This extends the EDG into multiple domains, allowing separate management and update of these domains. I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc. SOA and BAM are kept in the same domain, as little benefit is obtained by separating them.
    Key points:
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
    Benefits:
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).
    Drawbacks:
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Shared Service Environment
    This model extends the previous multiple-domain arrangement to provide a true shared service environment. It allows multiple additional SOA domains and/or other domains to take advantage of the shared services. Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independently of other application groups.
    Key points:
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
    - The shared SOA domain hosts Human Workflow tasks, BAM, and common "utility" composites.
    - A single OSB domain provides the "Enterprise Service Bus".
    - All domains use the same OWSM policy store (MDS-WSM).
    Benefits:
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).
    - Supports large numbers of deployed composites in multiple domains.
    - Single URL for Human Workflow end users.
    - Single URL for BAM end users.
    Drawbacks:
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Human Workflow needs to be specially configured to point to the shared services domain.

    Summary
    The alternatives in this blog allow patching to have different impacts, depending on the model chosen. Each organization must decide the tradeoffs for itself. One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains. The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes. Hopefully this blog will help your organization choose the right model for you.

    Read the article

  • Uploadify works for Visual Studio but not for IIS 7 (same machines), using Forms authentication. Doe

    - by Marc
    I'm using the Uploadify jQuery control for client-side uploads. I think my IIS 7 configuration has issues with it. The Uploadify POST immediately returns an HTTP/1.1 302 Found, redirecting back to my login page. I've tried to allow anonymous access to the uploading section (subfolder), plus the page (script) that processes the image, in the web.config, using the location node (configuration ... location). It seems like the Uploadify POST is immediately blocked. Again, this worked fine just using Visual Studio 2008, but when I run the site on the same machine under IIS I get the redirect. Your thoughts/ideas are very welcome!
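    For reference, a minimal web.config sketch of the location-based carve-out described above; the handler path is an assumption:

        <!-- Let unauthenticated requests reach the upload handler only
             (path is hypothetical); everything else stays behind Forms auth. -->
        <location path="Uploads/Upload.ashx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    A 302 to the login page under Forms authentication typically means the request arrived without the authentication cookie; Flash-based uploaders like Uploadify generally do not send the browser's cookies, which is why an anonymous carve-out (or passing the session/auth token explicitly with the upload) is the usual workaround.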

    Read the article

  • Can a CLSID be different for the same program installed on two different machines?

    - by uberjumper
    I am using comtypes to generate wrappers for a certain COM library. I am having issues with a few things that are not being generated properly. I can get around this by doing the missing work manually. However, can I depend on the fact that CLSIDs will not change? Let's say I install a program with the COM library Foo 1.0, and then I install the exact same version of that program on another PC - will the CLSIDs of the interfaces change? This might be a terribly dumb question.

    Read the article

  • Can we sniff packets between 2 machines in a network from a third machine using wireshark or etherea

    - by coolcake
    I have a small network in which there are 2 electronic devices and one desktop connected using a switch. From the desktop, which has Ethereal/Wireshark installed, can I sniff the packets being communicated between the 2 electronic devices? I cannot install Ethereal or Wireshark on either of the electronic devices, but I need to monitor the traffic between the 2 devices from my desktop, which is connected via the same switch.

    Read the article

  • .NET MissingMethodException occurring on one of thousands of end-user machines -- any insight?

    - by Yoooder
    This issue has me baffled; it's affecting a single user (to my knowledge) and hasn't been reproduced by us. The user is receiving a MissingMethodException, and our trace file indicates it's occurring after we create a new instance of a component, when we're calling an Initialize/Setup method in preparation to have it do work (InitializeWorkerByArgument in the example). The method specified by the error is an interface method, which a base class implements and which classes derived from the base class can override as needed. The user has the latest release of our application. All the provided code is shipped within a single assembly. Here's a very distilled version of the component:

        class Widget : UserControl
        {
            public void DoSomething(string argument)
            {
                InitializeWorkerByArgument(argument);
                this.Worker.DoWork();
            }

            private void InitializeWorkerByArgument(string argument)
            {
                switch (argument)
                {
                    case "SomeArgument":
                        this.Worker = new SomeWidgetWorker();
                        break;
                }

                // The issue I'm tracking down would have occurred during
                // "new SomeWidgetWorker()" and would have resulted in a
                // MissingMethodException stating that method "DoWork"
                // could not be found.
                this.Worker.DoWorkComplete += new EventHandler(Worker_DoWorkComplete);
            }

            private IWidgetWorker Worker { get; set; }

            void Worker_DoWorkComplete(object sender, EventArgs e)
            {
                MessageBox.Show("All done");
            }
        }

        interface IWidgetWorker
        {
            void DoWork();
            event EventHandler DoWorkComplete;
        }

        abstract class BaseWorker : IWidgetWorker
        {
            // Base implementation; derived classes may override DoWork.
            public virtual void DoWork()
            {
                System.Threading.Thread.Sleep(1000);
                RaiseDoWorkComplete(this, null);
            }

            // Raises DoWorkComplete if anyone is subscribed.
            internal void RaiseDoWorkComplete(object sender, EventArgs e)
            {
                if (DoWorkComplete != null)
                {
                    DoWorkComplete(this, null);
                }
            }

            public event EventHandler DoWorkComplete;
        }

        class SomeWidgetWorker : BaseWorker
        {
            public override void DoWork()
            {
                System.Threading.Thread.Sleep(2000);
                RaiseDoWorkComplete(this, null);
            }
        }
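    Not part of the original post, but a hedged diagnostic sketch that fits this symptom: a MissingMethodException on a single machine very often means a stale copy of the assembly is being loaded from an unexpected location, and logging the load path makes that visible. The names mirror the distilled example above.

        using System.Diagnostics;

        static class LoadDiagnostics
        {
            // Record which physical file the worker's assembly was loaded
            // from on the failing machine; a stale copy (GAC, shadow-copy
            // cache, old install folder) is a classic cause of
            // MissingMethodException on one box out of thousands.
            public static void TraceWorkerAssembly()
            {
                var asm = typeof(SomeWidgetWorker).Assembly;
                Trace.WriteLine("Worker assembly: " + asm.FullName);
                Trace.WriteLine("Loaded from:     " + asm.Location);
            }
        }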

    Read the article

  • Where can I find the transaction protocol used by Automated Teller Machines?

    - by Dave
    I'm doing a grad-school software engineering project and I'm looking for the protocol that governs communications between ATMs and bank networks. I've been googling for quite a while now, and though I'm finding all sorts of interesting information about ATMs, I'm surprised to find that there seems to be no industry standard for high-level communications. I'm not talking about 3DES or low-level transmission protocols, but something along the lines of an Interface Control Document; something that governs the sequence of events for various transactions: verify credentials, withdrawal, check balance, etc. Any ideas? Does anything like this even exist? I can't believe that after all this time the banks and ATM manufacturers are still just making this up as they go. A shorter question: if I wanted to go into the ATM software manufacturing business, where would I start looking for standards?

    Read the article

  • Can I join between two MySQL tables stored on separate machines?

    - by CuriousCoder
    I have a relatively light query that needs information from a local MySQL table along with another MySQL table stored on a physically separate machine (on the same network). I'm keen to avoid setting up replication just to facilitate this light query, which only needs to be executed once a day. Is there any way I can join with a table on a remote machine using one query? Or run a SELECT INTO a local table? Notes: I'm using C# and .NET 4.
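    One avenue worth sketching, since the question explicitly allows a single query: MySQL's FEDERATED storage engine lets a local table definition point at a table on the remote server, after which an ordinary join works. The engine is disabled by default and must be enabled on the local server; the host, credentials, and schema below are assumptions for illustration.

        -- Local stand-in for the remote table (connection details are hypothetical)
        CREATE TABLE remote_orders (
            id          INT NOT NULL,
            customer_id INT NOT NULL,
            total       DECIMAL(10,2),
            PRIMARY KEY (id)
        ) ENGINE=FEDERATED
          CONNECTION='mysql://app_user:app_pass@192.168.0.20:3306/shop/orders';

        -- A normal join between a genuinely local table and the remote data
        SELECT c.name, SUM(r.total) AS daily_total
        FROM customers AS c
        JOIN remote_orders AS r ON r.customer_id = c.id
        GROUP BY c.name;

    Since the caller is C#/.NET 4, both statements run unchanged through Connector/NET's MySqlCommand once the FEDERATED table exists.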

    Read the article

  • Logging Application Block doesn't add log entries to Event Viewer on machines other than that on whi

    - by Neo
    I am using the Logging Application Block (from Microsoft Enterprise Library 5.0) to log, in the Event Viewer, exceptions that occur in my WPF XBAP application. However, exceptions are only being logged if the application is run on my machine (the machine it was built on); on any other machine it doesn't log anything. I've tried to find a reason why this might be occurring - I've tried setting requirePermission to false - but to no avail. Has anyone any idea why this might be happening?
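    Two well-known causes fit this symptom and are offered here as hypotheses, not a diagnosis: XBAP applications normally run in partial trust, which blocks event log access outright, and even in full trust only an administrator can create a new event source, so logging silently fails on machines where the source was never registered. A sketch of registering the source once, for example from an elevated installer step; the source and log names are assumptions:

        using System.Diagnostics;

        static class EventSourceSetup
        {
            // Run once with admin rights (e.g., in an installer custom
            // action); "MyXbapApp" is a hypothetical source name.
            public static void EnsureSource()
            {
                if (!EventLog.SourceExists("MyXbapApp"))
                    EventLog.CreateEventSource("MyXbapApp", "Application");
            }
        }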

    Read the article

  • Should the hostname of my VPS point to the dedicated IP of my domain or to a shared one used for new account creation?

    - by thomas
    I leased a VPS which I want to use to sell shared hosting. It has 3 IPs - I call them A, B and C here for simplicity. The actual setup is: A = NS1.mydomain.com and host.mydomain.com, and is used to set up new accounts in the shared environment; B = NS2.mydomain.com; C = the dedicated IP for mydomain.com (SSL-secured). The more I read about DNS, the more confused I get; thus my question: is this configuration good practice, especially the hostname pointing to A rather than to C? And what would be a better alternative?

    Read the article

  • libclntsh.so.11.1: cannot open shared object file.

    - by zhangzhong
    I want to schedule a task on Linux via icrontab. The task is written in Python and has to import the cx_Oracle module, so I export ORACLE_HOME and LD_LIBRARY_PATH in .bash_profile, but it raises the error: libclntsh.so.11.1: cannot open shared object file. It is fine to run the task by issuing the command in a shell, like:

        python a.py   # ok

    So I changed the task in icrontab into a shell script which invokes my Python script, but the exception recurred:

        #!/bin/bash
        # the shell script scheduled in icrontab
        python a.py

    Could you help me figure out how to deal with this?
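    The usual explanation is that cron (and icrontab) runs jobs with a minimal environment and never sources .bash_profile, so exports made there are invisible to the job. A sketch of a self-contained wrapper script; the ORACLE_HOME path is an assumption:

        #!/bin/bash
        # cron provides a minimal environment, so set the Oracle variables
        # here rather than relying on .bash_profile (path below is assumed)
        export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
        python a.py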

    Read the article

  • Custom ASP.NET MVC cache controllers in a shared hosting environment?

    - by Daniel Crenna
    I'm using custom controllers that cache static resources (CSS, JS, etc.) and images. I'm currently working with a hosting provider that has set me up under a full-trust profile. Despite being in full trust, my controllers fail because the caching strategy relies on the File class to open a resource file directly prior to treatment and storage in memory. Is this something that would likely occur in all full-trust shared hosting environments, or is this specific to my host? The static files live within my application's structure and not in an arbitrary server path. It seems to me that custom caching would require code to access the file directly, and I am hoping someone else has dealt with this issue.
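    One way to probe whether direct System.IO.File access is the blocked piece is to read the resource through the hosting environment's VirtualPathProvider instead, which stays inside ASP.NET's own abstraction. A sketch of a hypothetical controller action; the ~/Content layout and names are assumptions:

        using System.IO;
        using System.Web;
        using System.Web.Hosting;
        using System.Web.Mvc;

        public class StaticResourceController : Controller
        {
            // Serve a CSS file via the VirtualPathProvider rather than
            // opening it directly with System.IO.File.
            public ActionResult Css(string name)
            {
                string virtualPath = "~/Content/" + name + ".css"; // assumed layout
                VirtualPathProvider vpp = HostingEnvironment.VirtualPathProvider;
                if (!vpp.FileExists(virtualPath))
                    throw new HttpException(404, "Not found");

                using (Stream stream = vpp.GetFile(virtualPath).Open())
                using (var reader = new StreamReader(stream))
                {
                    return Content(reader.ReadToEnd(), "text/css");
                }
            }
        }

    If this works where the File-based path fails, the host is restricting raw file I/O despite the full-trust label, and that becomes a question to put to them directly.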

    Read the article

  • Handling changes in an interface shared across multiple solutions?

    - by Anthony Mastrean
    Our "main" solution is the development code: shared libraries, services, UI projects, etc. The other solution is an integration and automated tests solution. It references several of the development projects. The reason it is separate is to avoid interference with the development solution's unit test VSMDI file. And to allow us to play with different execution methods (other test runners, like Gallio or StoryTeller) without interfering with the development solution. Recently, an interface changed in the development solution, one of our test mocks implemented that interface. But, it was not updated because there was no warning at compile time because it was in another solution. This broke our CI build. Does anyone have a similar setup? How do you handle these issues, do you follow a strict procedure or is there some kind of technical answer?

    Read the article

  • Gluster: strange issue with a shared mount point behaving like separate mounts

    - by Satish
    I have two nodes, and for an experiment I have installed GlusterFS, created a volume, and successfully mounted it on each node. But if I create a file on node1 it does not show up on node2 - it looks as if the two are behaving like separate mounts.

        node1:  10.101.140.10:/nova-gluster-vol  2.0G  820M  1.2G  41% /mnt
        node2:  10.101.140.10:/nova-gluster-vol  2.0G   33M  2.0G   2% /mnt

    Volume heal info for split-brain:

        $ sudo gluster volume heal nova-gluster-vol info split-brain
        Gathering Heal info on volume nova-gluster-vol has been successful
        Brick 10.101.140.10:/brick1/sdb
        Number of entries: 0
        Brick 10.101.140.20:/brick1/sdb
        Number of entries: 0

    Test on node1:

        $ echo "TEST" > /mnt/node1
        $ ls -l /mnt/node1
        -rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1

    On node2 the file isn't there, even though they share the same mount:

        $ ls -l /mnt/node1
        ls: cannot access /mnt/node1: No such file or directory

    What am I missing?
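    A quick sanity check worth running on both nodes before digging deeper - a sketch reusing the volume name from the question - to confirm that /mnt on each node really is a GlusterFS client mount and not a stale or local mount:

        # Show what is actually mounted at /mnt (should report fuse.glusterfs)
        mount | grep /mnt
        # If node2 reports anything else, remount through the gluster client
        umount /mnt
        mount -t glusterfs 10.101.140.10:/nova-gluster-vol /mnt

    Writing into a brick directory (/brick1/sdb) directly, instead of through the client mount, produces exactly this "two separate filesystems" symptom, so it is also worth confirming that nothing touches the bricks.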

    Read the article
