Search Results

Search found 48592 results on 1944 pages for 'cannot start'.


  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking about the next big innovation in the world of Big Data. Big data is indeed in the future of every organization at some point. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding the ideal platform to support the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB; SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database; SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database.
    Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the questions I frequently receive is whether there is any way to migrate data from SQL Server to NuoDB. There is indeed a way to do so, and NuoDB provides a fantastic tool to help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for the target NuoDB database, dumps the data from the source database, and then loads that data into the target NuoDB database. Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in SQL Server, which we will later migrate to NuoDB.
    Setup Step 1: Build some sample data
        CREATE DATABASE [Test];
        CREATE TABLE [Department](
            [DepartmentID] [smallint] NOT NULL,
            [Name] VARCHAR(100) NOT NULL,
            [GroupName] VARCHAR(100) NOT NULL,
            [ModifiedDate] [datetime] NOT NULL,
            CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC )
        ) ON [PRIMARY];
        INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department;
    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build the sample table any way you prefer.
    Setup Step 2: Install 64-bit Java
    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer, because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OS X, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure the JAVA_HOME variable in your environment settings points to your Java installation directory, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME, Variable Value: C:\Program Files\Java\jre7. Make sure you enter your own Java installation directory in the Variable Value field.
    Setup Step 3: Install a JDBC driver for SQL Server.
    There are two JDBC drivers available for SQL Server. Select the one you prefer by following one of the two links below: the Microsoft JDBC Driver or the jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.
    Migration Step 1: NuoDB Schema Generation
    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test', and my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'. The command is: nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql
    The above command generates a schema for all my SQL Server tables and writes it to the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.
    Migration Step 2: Generate the Dump File of the Data
    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file which will contain all the data from all the tables in the SQL Server database. The command to do so is very similar to the one above. Be aware that this step may take a while, depending on your database size: nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat
    Once the above command has executed successfully you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will take care of completing the migration process.
    Migration Step 3: Load the Data into NuoDB
    After building the schema and taking a dump of the data, the final step is to take the CSV file and load it into the NuoDB database: nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat
    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.
    Voila! You're done. It took three setup steps and three migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent, scale-out applications.
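    For quick reference, here are the three migration commands from the walkthrough collected into a single Windows batch sketch. The host, port, credentials, and paths are the ones used in this example and will differ in your environment; run it from the NuoDB migrator bin folder.
        rem Run from C:\Program Files\NuoDB\tools\migrator\bin

        rem 1. Generate the NuoDB schema from the SQL Server source
        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver ^
            --source.url=jdbc:jtds:sqlserver://localhost:1433/ ^
            --source.username=test --source.password=test ^
            --source.catalog=test --source.schema=dbo ^
            --output.path=/tmp/schema.sql

        rem 2. Dump the source data to CSV
        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver ^
            --source.url=jdbc:jtds:sqlserver://localhost:1433/ ^
            --source.username=test --source.password=test ^
            --source.catalog=test --source.schema=dbo ^
            --output.type=csv --output.path=/tmp/dump.cat

        rem 3. Load the dump into the target NuoDB database (create "mytest" first)
        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest ^
            --target.schema=dbo --target.username=test --target.password=test ^
            --input.path=/tmp/dump.cat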
    In this blog post, I have done my best to come up with a simple and easy process that you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB: I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor, who has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs (Transaction Engines) and SMs (Storage Managers), but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspx
    Didn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform.
    NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs.
    Which scenarios does the new enhancement best fit?
    Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as:
    On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.
    "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start-up the remaining VMs only when needed to support peak loads.
    Disaster Recovery - Start-up "cold" VMs when needed to recover from disaster scenarios.
    BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric.
    In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each...
    How many ways can I shutdown a VM?
    In Windows Azure Infrastructure Services, there are three general ways to safely shut down VMs:
    1. Shutdown VM via Windows Azure Management Portal
    2. Shutdown Guest Operating System inside the VM
    3. Stop VM via Windows PowerShell using the Windows Azure PowerShell Module
    Although each of these options performs a safe shutdown of the guest operating system and the VM itself, each option handles the VM shutdown end state differently.
    Shutdown VM via Windows Azure Management Portal
    When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources. (Figure: Shutdown button on Virtual Machines page in Windows Azure Management Portal.) When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. (Figure: Virtual Machine in a "Stopped (Deallocated)" status.) "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to accumulate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.
    TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using.
    What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted.
    TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service.
    Shutdown Guest Operating System inside the VM
    When performing a Guest OS shutdown or restart ( i.e., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may take a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than going through an Azure management endpoint. (Figure: Virtual Machine in a "Stopped" status.) VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
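    To make the PowerShell option above concrete, here is a minimal sketch of the two stop behaviors, assuming the latest Windows Azure PowerShell Module is imported and a subscription is already selected; the cloud service and VM names are placeholders.
        # Stop and deallocate: no compute charges accrue, but the DIP (and the
        # cloud service VIP, if this is the last running VM) may change.
        # Same end state as the portal's Shutdown button.
        Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM"

        # Stop but keep the fabric reservation: compute charges continue,
        # DIP and VIP are preserved. Same end state as a guest OS shutdown.
        Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM" -StayProvisioned

        # Start the VM again later.
        Start-AzureVM -ServiceName "MyCloudService" -Name "MyVM"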

    Read the article

  • Will Visual Studio 2010 only run 4.0 unit tests?

    - by Bjorn Bailleul
    I have different projects written in .NET 3.5 and some unit test projects that cover them. When converting my solution for use in Visual Studio 2010, I keep all my projects on 3.5, but the unit test projects are forced to 4.0. This way I cannot use them with my regular projects anymore, resulting in this: Could not load file or assembly 'xxx.xxx.Core.UnitTest' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. So I can't unit test any project lower than 4.0? Or am I doing something wrong here?

    Read the article

  • WSAECONNRESET (10054) error using WebDrive to map to a Subversion/Apache WebDAV share

    - by Dylan Beattie
    Hello, I'm using WebDrive to map a drive letter to a WebDAV share running on Subversion with the SVNAutoversioning flag enabled. The Subversion server is running CollabNet Subversion Edge with LDAP authentication. When trying to connect using WebDrive, I get: Connecting to site myserver Connecting to http://myserver/webdrive/ Resolving url myserver to an IP address Url resolved to IP address 192.168.0.12 Connecting to 192.168.0.12 on port 80 Connected successfully to the server on port 80 Testing directory listing ... Connecting to 192.168.0.12 on port 80 Connected successfully to the server on port 80 Unable to connect to server, error information below Error: Socket receive failure (4507) Operation: Connecting to server Winsock Error: WSAECONNRESET (10054) The httpd.conf file running on the server contains the following section: <Location /webdrive/> DAV svn SVNParentPath "C:\Program Files\Subversion\data\repositories" SVNReposName "My Subversion WebDrive" AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file" SVNListParentPath On Allow from all AuthType Basic AuthName "My Subversion Repository" AuthBasicProvider csvn-file-users ldap-users Require valid-user ModMimeUsePathInfo on SVNAutoversioning on </Location> and in the Apache error_yyyy_mm_dd.log file on the server, I'm seeing this when I try to connect via WebDAV: [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' OPTIONS webdrive:/ [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/ [Mon Jan 10 14:53:25 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting. [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. 
[Mon Jan 10 14:53:25 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations [Mon Jan 10 14:53:25 2011] [notice] Server built: Oct 4 2010 19:55:36 [Mon Jan 10 14:53:25 2011] [notice] Parent: Created child process 4368 [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:25 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'. [Mon Jan 10 14:53:25 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'. [Mon Jan 10 14:53:25 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'. [Mon Jan 10 14:53:25 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Child process is running [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(408): Child 4368: Retrieved our scoreboard from the parent. [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 288 and sending it to child process 4368 [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 276 and sending it to child process 4368 [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(564): Child 4368: retrieved 2 listeners from parent [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Acquired the start mutex. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting 64 worker threads. [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 4368 [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 49159. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 80. [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [4368] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [4368] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:25 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/ [Mon Jan 10 14:53:28 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting. [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. 
LDAP SDK [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:28 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations [Mon Jan 10 14:53:28 2011] [notice] Server built: Oct 4 2010 19:55:36 [Mon Jan 10 14:53:28 2011] [notice] Parent: Created child process 5440 [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:28 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'. [Mon Jan 10 14:53:28 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'. [Mon Jan 10 14:53:28 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'. [Mon Jan 10 14:53:28 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Child process is running [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(408): Child 5440: Retrieved our scoreboard from the parent. [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 288 and sending it to child process 5440 [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 276 and sending it to child process 5440 [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(564): Child 5440: retrieved 2 listeners from parent [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Acquired the start mutex. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting 64 worker threads. [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 5440 [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 49159. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 80. Browsing http://myserver/webdrive/ from a web browser is working fine, and I have a similar set-up working perfectly on a different SVN server that isn't running Collabnet but has had Subversion and Apache installed and configured separately. Any ideas? The python version error might be red herring - I've seen it in a couple of places in the log files and in other scenarios it doesn't appear to be breaking anything...

    Read the article

  • How to configure MessageEndpointMapping by namespace in NServiceBus

    - by SteveBering
    I am trying to configure the message endpoint mappings in my NServiceBus configuration so that messages from different namespaces are sent to different endpoints. As such, I have configured the following in my web.config: However, when my application starts, I receive the following exception: Spring.Objects.PropertyAccessExceptionsException: PropertyAccessExceptionsException (1 errors); nested PropertyAccessExceptions are: [Spring.Core.TypeMismatchException: Cannot convert property value of type [System.Collections.Hashtable] to required type [System.Collections.IDictionary] for property 'MessageOwners'., Inner Exception: System.ArgumentException: Problem loading message assembly: Company.Messages.Payments --- System.IO.FileNotFoundException: Could not load file or assembly 'Company.Messages.Payments' or one of its dependencies. The system cannot find the file specified. File name: 'Company.Messages.Payments' What I find interesting is that it seems to have found Company.Messages.Accounts but failed on the second configured line. I thought that maybe it didn't like having them all go to the same endpoint, but changing the configuration to send them to different endpoints didn't change the error message I received. What am I doing wrong? Is it not possible to segment messages by namespace (all I have seen is by type and by assembly)? Thanks, Steve
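    For comparison, here is a sketch of the two mapping forms the UnicastBusConfig section accepts, per message assembly or per fully qualified type; the "Messages" value is treated as an assembly (or type) name rather than a namespace, which is consistent with the FileNotFoundException above. The assembly, type, and queue names below are illustrative, not taken from the question.
        <UnicastBusConfig>
          <MessageEndpointMappings>
            <!-- route every message type in an assembly to one endpoint -->
            <add Messages="Company.Messages" Endpoint="AccountsQueue" />
            <!-- route a single message type (assembly-qualified name) elsewhere -->
            <add Messages="Company.Messages.Payments.PaymentReceived, Company.Messages"
                 Endpoint="PaymentsQueue" />
          </MessageEndpointMappings>
        </UnicastBusConfig>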

    Read the article

  • HTTP Error 500.19 - Internal Server Error for silverlight application

    - by KentZhou
    Use the VS2010 Silverlight Business Application template to create a default solution. Change authentication to Windows in Web.config and the code in App.xaml.cs to use Windows authentication. Nothing else is changed. Then I run this app from the VS2010 built-in web server and it is fine: I can see the logged-in Windows user info (from an AD domain account) displayed at the top-right of the screen, like DomainName\userName. Then I deploy this web app to IIS on Windows 7 (the same computer), access the app again, and get the following error:
    HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid.
    Detailed Error Information:
        Module: IIS Web Core
        Notification: Unknown
        Handler: Not yet determined
        Error Code: 0x80070005
        Config Error: Cannot read configuration file due to insufficient permissions
        Config File: \\?\C:\Users\myname\Documents\Visual Studio 2010\Projects\BusinessApplication4\BusinessApplication4.Web\web.config
        Requested URL: http://localhost:77/BusinessApplication4TestPage.html
        Physical Path:
        Logon Method: Not yet determined
        Logon User: Not yet determined
        Config Source: -1: 0:
    How do I resolve this problem?
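    Since the config error explicitly reports insufficient permissions on a web.config that lives under a user profile folder, one thing worth trying is granting the IIS worker group read access to the project folder. A hedged sketch, using the path from the error message; adjust the path, or better, publish the site to a folder outside the profile such as C:\inetpub:
        icacls "C:\Users\myname\Documents\Visual Studio 2010\Projects\BusinessApplication4\BusinessApplication4.Web" /grant "IIS_IUSRS:(OI)(CI)(RX)" /T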

    Read the article

  • IPC::Open3 Fails Running Under Apache

    - by rjray
    I have a module that uses IPC::Open3 (or IPC::Open2, both exhibit this problem) to call an external binary (bogofilter in this case) and feed it some input via the child-input filehandle, then reads the result from the child-output handle. The code works fine when run in most environments. However, the main use of this module is in a web service that runs under Apache 2.2.6. And under that environment, I get the error: Cannot fdopen STDOUT: Invalid argument This only happens when the code runs under Apache. Previously, the code constructed a horribly complex command, which included a here-document for the input, and ran it with back-ticks. THAT worked, but was very slow and prone to breaking in unique and perplexing ways. I would hate to have to revert to the old version, but I cannot crack this.
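    For context, a minimal sketch of the open3 pattern being described, written with explicit gensym handles rather than relying on the parent's STDIN/STDOUT (which may not be ordinary file descriptors when running inside the web server). This only illustrates the call shape, not a guaranteed fix for the fdopen error; the bogofilter arguments and message text are placeholders.
        use strict;
        use warnings;
        use IPC::Open3;
        use Symbol qw(gensym);

        my $message = "Subject: test\n\nsample body\n";   # placeholder input

        # Explicit handles for the child's stdin, stdout and stderr.
        my ($to_child, $from_child, $err_child) = (gensym, gensym, gensym);
        my $pid = open3($to_child, $from_child, $err_child, 'bogofilter', '-T');

        print {$to_child} $message;      # feed the message to bogofilter
        close $to_child;

        my $result = do { local $/; <$from_child> };   # read the classification
        waitpid $pid, 0;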

    Read the article

  • Build an Organization Chart In Visio 2010

    - by Mysticgeek
    With everything involved in managing a business these days, it's very important to have an Organization Chart to keep everything manageable. Here we'll show you how to build one in Visio 2010. This Guest Article was written by our friends over at Office 2010 Club.
    Need for Organization Charts
    Organization Charts are becoming indispensable these days, as companies focus on extensive hiring for far-reaching availability, increased productivity, and targeting diverse markets. Considering this rigorous change, creating an organization chart can help stakeholders comprehend the ever-growing organization structure & hierarchy with ease. It shows the basic structure of the organization and defines the relationships between employees working in different departments. Opportunely, Microsoft Visio 2010 offers an easy way to create an Organization Chart. Until now, orthodox ways of listing the organization hierarchy have been used to define the structure of departments along with the possible lines of communication, both horizontal and vertical. To transform these lists, which define the organizational structure, into a detailed chart, Visio 2010 includes an add-in for importing an Excel spreadsheet, which comes in handy for pulling data out of a spreadsheet to create an organization chart. Importantly, you don't need to lose yourself in a maze of defining organizational hierarchies and chalking out structure: you just need to specify the column & row headers along with the data you need to import, and it will automatically create a chart defining the organizational hierarchy, with the specified credentials of each employee, categorized into their corresponding departments.
    Creating Organization Charts in Visio 2010
    To start off with, we have created an Excel spreadsheet with the fields Name, Supervisor, Designation, Department and Phone. The Name field contains the names of all the employees working in different departments, whereas the Supervisor field contains the names of supervisors or team leads. This field is vital for creating an Organization Chart, as it defines the basic structure & hierarchy of the chart.
    Now launch Visio 2010, head over to the View tab, and under the Add-Ons menu, from the Business options, click Organization Chart Wizard. This will start the Organization Chart Wizard. In the first step, enable the "Information that's already stored in a file or database" option, and click Next. As we are importing an Excel sheet, select the second option for importing an Excel spreadsheet. Specify the Excel file path and click Next to continue. In this step, you need to specify the fields which actually define the structure of the organization. In our case, these are the Name & Supervisor fields. After specifying the fields, click Next to proceed further.
    An organization chart is primarily for showing the hierarchy of departments and employees in the organization, how they are linked together, and who supervises whom. Considering this, in this step we will leave out the Supervisor field, because its inclusion isn't necessary; Visio automatically chalks out the basic structure defined in the Excel sheet. Add the rest of the fields under the Displayed fields category, and click Next. Now choose the fields which you want to include in the Organization Chart's shapes and click Next. This step is about breaking the chart into multiple pages: if you are dealing with 100+ employees, you may want to specify the number of pages on which the Organization Chart will be displayed.
    But in our case, we are dealing with far less data, so we will enable the "I want the wizard to automatically break my organization chart across pages" option. Specify the name you need to show at the top of the page. If you have fewer than 20 hierarchies, enter the name of the highest-ranked employee in the organization and click Finish to end the wizard. It will instantly create an Organization Chart out of the specified Excel spreadsheet. The highest-ranked employee will be shown at the top of the organization chart, supervising various employees from different departments. As shown below, his immediate subordinates in turn manage other employees, and so on.
    For advanced customizations, head over to the Org Chart tab; here you will find different groups for setting up the Org Chart's hierarchy and managing other employees' positions. Under the Arrange group, shape arrangements can be changed, and it provides easy navigation through the chart. You can also change the type of a position and hide the subordinates of a selected employee. From the Picture group, you can insert a picture of the employees, departments, etc. From the Synchronization group, you have the option of creating a synced copy and expanding the subordinates of the selected employee. Under the Organization Data group, you can change the whole layout of the Organization Chart from Display Options, including shape display, show divider, enable/disable imported fields, change block position, and fill colors.
    If at any point you need to insert a new position or announce a vacancy, the Organization Chart stencil is always available on the left sidebar. Drag the desired Organization Chart shape into the main diagram page; to maintain the structure's integrity, i.e., for inserting subordinates for a specific employee, drag the position shape over the existing employee's shape box. For instance, we have added a consultant to the organization who is directly under the CEO; to maintain this, we have dragged the Consultant box and simply dropped it over the CEO box to make it an immediate subordinate position. Adding details to the new position is a cinch: just right-click the new position box and click Properties. This will open up the Shape Data dialog; fill in all the relevant information and click OK. Here you can see the newly created position is easily populated with all the specified information. Now expanding an Organization Chart doesn't require the maintenance of long lists any more. Under the Design tab, you can also try out different designs & layouts for the organization chart to make it look more flamboyant and professional.
    Conclusion
    An Organization Chart is a great way of showing detailed organizational hierarchies, with defined credentials of employees, department structure, new vacancies, newly hired employees, and recently added departments; importantly, it shows the most convenient paths of interaction between different departments & employees.

    Read the article

  • How to setup a project to use TweetSharp

    - by Stuart Helwig
    I am trying to follow Pete Brown's introductory WPF tutorial which makes use of the TweetSharp libraries to interact with Twitter. I have downloaded what appears to be the latest TweetSharp binaries (and a few others including the ReleaseCandidate) from Codeplex (http://tweetsharp.codeplex.com/). No matter what references I add and no matter what using statements I try, I cannot create a reference to the TwitterService for the FluentTwitter class. I simply get the compiler error - "The type or namespace cannot be found". Now I've noticed that the TweetSharp.dll that Pete references is 518KB but the one contained in each of my different downloads is only 84kb. (I've tried several times - I am getting the full download here). The link from Pete's article to the TweetSharp libraries, no longer works (http://code.google.com/p/tweetsharp/). What basic element am I missing here or what could I be doing wrong?

    Read the article

  • Serializing data using IEnumerable<T> with WebGet

    - by Jim
    possible duplicate: Cannot serialize parameter of type ‘System.Linq.Enumerable… ’ when using WCF, LINQ, JSON Hi, If my method signature looks like this, it works fine: [WebGet] MyClass[] WebMethod() If the signature looks like this: [WebGet] IEnumerable<T> WebMethod() I get the following error: Cannot serialize parameter of type 'X.Y.Z.T+<WebMethod>d__2c' (for operation 'WebMethod', contract 'IService') because it is not the exact type 'System.Collections.Generic.IEnumerable`1[X.Y.Z.T]' in the method signature and is not in the known types collection. In order to serialize the parameter, add the type to the known types collection for the operation using ServiceKnownTypeAttribute. I have tried adding ServiceKnownType(typeof(IEnumerable)), but I get the same error. Is this a bug in VS 2010 Beta 2, or is this likely to be correct going forward? Thanks
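    The error names a compiler-generated iterator type (the +<WebMethod>d__2c suffix), which suggests the method body uses yield return or a deferred LINQ query; one workaround consistent with the first, working signature above is to materialize the sequence before returning it. A minimal sketch; MyClass, IService, and GetResults are placeholders, not the real types from the question:
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        public class MyClass { public int Id { get; set; } }

        [ServiceContract]
        public interface IService
        {
            [OperationContract]
            [WebGet]
            MyClass[] WebMethod();   // declared as a concrete array, as in the working case
        }

        public class Service : IService
        {
            public MyClass[] WebMethod()
            {
                // Hypothetical iterator/LINQ source; ToArray() materializes it, so WCF
                // serializes a plain array rather than the compiler-generated iterator type.
                IEnumerable<MyClass> results = GetResults();
                return results.ToArray();
            }

            private IEnumerable<MyClass> GetResults()
            {
                yield return new MyClass { Id = 1 };
            }
        }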

    Read the article

  • iPad/iPhone HTML5 video loading

    - by mrollinsiv
    I'm trying to load video on the fly on the iPad/iPhone and notice that I cannot place a div above it. I put the overlay in the HTML so that it's generated on page load and not added via JavaScript, and the video, when it's created, is absolutely positioned below this element. This works on a PC; I'm wondering if, since the video was created via JS, the iPhone OS is overriding the z-index and forcing it to the top? Also, is there a way to override the default "cannot play" icon, the one with the slash, and show a loading animation instead? That would solve my issue via another route. My last option would be to load all the video tags via JS on page load and have them layered on top of each other for the iPad/iPhone. Since the iPhone OS won't load any video until requested, would this work? I'm also having an issue with the iPhone showing the "poster" attribute that is set on the video tag.

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution.
    What we did to demonstrate scenario 1:
    1. Set up a PC and Mac side-by-side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder.
    4. On the Mac, in that central location, drag the created folder to the Mac desktop. This should work fine, no problem.
    5. On the PC, rename that folder.
    6. On the Mac, drag a file to that renamed folder. This should error with the following message:
       a. "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?"
       b. Select Skip; the result is that the filename is copied to the location with a zero or small byte size. Try opening it and you get a "file is corrupted" error message.
    What we did to demonstrate scenario 2:
    1. Set up a PC and Mac side-by-side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder, then create a subfolder.
    4. Copy some content into the subfolder.
    5. On the Mac, in that central location, drag the created top-level folder to the Mac desktop. This should work fine, no problem.
    6. On the PC, rename that subfolder.
    7. On the Mac, drag that top-level folder to the Mac desktop. This should error on the Mac with the following:
       a. "The operation cannot be completed because you do not have sufficient privileges for ..."
       b. "The operation cannot be completed because you do not have sufficient privileges for ..."
    8. On the Mac, if you open that subfolder you can see the file copied in step 4 above, but you cannot open that file; you get the following message if you try:
       a. "There was an error opening this document. You do not have permission to open this file."
    9. On the PC, drag some content into the top-level folder.
    10. On the Mac, you can open that file directly from the server or copy it locally, no problem; however, the subfolder is still corrupted (or locked, whichever it is).
    11. On the PC, rename the top-level folder.
    12. On the Mac, that same file just opened in step 10 above is now not accessible; you get the following message:
       a. "The document could not be opened."
    I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observe or act on the Mac side, kind of like the server is slow to respond. Also, the error message may vary. However, the key is that once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder, without renaming the folder. This has to be done on a PC because the corrupted folder is not accessible by a Mac user.
    Another problem that dovetails with the above is that we know certain characters are not allowed in PC folder or file names. If a Mac user creates a folder with a slash in the name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked out of that folder and will get the following error message:
    - "Sorry, the operation could not be completed because an unexpected error occurred. (Error code -50)"
    In addition to the above-mentioned character issue with folders, the problem is more evil with filenames. If, for example, you create a file with a slash in the filename on a Mac and copy it to the server, you will get the following error message:
    - "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?"
    Select either the Stop or Skip button; it does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message:
    - "Couldn't open the file. It may be corrupt or a file format that doesn't recognize."
    From the user's perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt when they open it.
    I want to make a note on this problem: it is the PC causing the issue. You can change folder and file names all day on a Mac and you don't have a problem, as long as a character is not the issue. Once you change the file name or folder name from a PC, the entire folder structure from that level down is corrupted. And it has to be resolved from a PC, by creating a new folder and copying the contents to the new folder, as stated above. Is something not configured correctly?
    SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"
    Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD
    Note: We use Novell clients on all Windows systems to connect to the servers for file access and network storage. We use AFP to allow OS X systems to connect to the servers.

    Read the article

  • Event Handlers Not Getting Called? - wxWidgets & C++

    - by Alex
    Hello all, I'm working on a program for my C++ programming class, using wxWidgets. I'm having a huge problem in that my event handlers (I assume) are not getting called, because when I click on the button to trigger the event, nothing happens. My question is: Can you help me find the problem and explain why they would not be getting called? The event handlers OnAbout and OnQuit are working, just not OnCompute or OnClear. I'm really frustrated as I can't figure this out. Thanks a bunch in advance! #include "wx/wx.h" #include "time.h" #include <string> using std::string; // create object of Time class Time first; class App: public wxApp { virtual bool OnInit(); }; class MainPanel : public wxPanel { public: // Constructor for panel class // Constructs my panel class // Params - wxWindow pointer // no return type // pre-conditions: none // post-conditions: none MainPanel(wxWindow* parent); // OnCompute is the event handler for the Compute button // params - none // preconditions - none // postconditions - tasks will have been carried otu successfully // returns void void OnCompute(wxCommandEvent& WXUNUSED(event)); // OnClear is the event handler for the Clear button // params - none // preconditions - none // postconditions - all text areas will be cleared of data // returns void void OnClear(wxCommandEvent& WXUNUSED(event)); // Destructor for panel class // params none // preconditions - none // postconditions - none // no return type ~MainPanel( ); private: wxStaticText *startLabel; wxStaticText *endLabel; wxStaticText *pCLabel; wxStaticText *newEndLabel; wxTextCtrl *start; wxTextCtrl *end; wxTextCtrl *pC; wxTextCtrl *newEnd; wxButton *compute; wxButton *clear; DECLARE_EVENT_TABLE() }; class MainFrame: public wxFrame { private: wxPanel *mainPanel; public: MainFrame(const wxString& title, const wxPoint& pos, const wxSize& size); void OnQuit(wxCommandEvent& event); void OnAbout(wxCommandEvent& event); ~MainFrame(); DECLARE_EVENT_TABLE() }; enum { ID_Quit = 1, ID_About, BUTTON_COMPUTE = 100, BUTTON_CLEAR = 200 }; IMPLEMENT_APP(App) BEGIN_EVENT_TABLE(MainFrame, wxFrame) EVT_MENU(ID_Quit, MainFrame::OnQuit) EVT_MENU(ID_About, MainFrame::OnAbout) END_EVENT_TABLE() BEGIN_EVENT_TABLE(MainPanel, wxPanel) EVT_MENU(BUTTON_COMPUTE, MainPanel::OnCompute) EVT_MENU(BUTTON_CLEAR, MainPanel::OnClear) END_EVENT_TABLE() bool App::OnInit() { MainFrame *frame = new MainFrame( _("Good Guys Delivery Time Calculator"), wxPoint(50, 50), wxSize(450,340) ); frame->Show(true); SetTopWindow(frame); return true; } MainPanel::MainPanel(wxWindow* parent) : wxPanel(parent) { startLabel = new wxStaticText(this, -1, "Start Time:", wxPoint(75, 35)); start = new wxTextCtrl(this, -1, "", wxPoint(135, 35), wxSize(40, 21)); endLabel = new wxStaticText(this, -1, "End Time:", wxPoint(200, 35)); end = new wxTextCtrl(this, -1, "", wxPoint(260, 35), wxSize(40, 21)); pCLabel = new wxStaticText(this, -1, "Percent Change:", wxPoint(170, 85)); pC = new wxTextCtrl(this, -1, "", wxPoint(260, 85), wxSize(40, 21)); newEndLabel = new wxStaticText(this, -1, "New End Time:", wxPoint(180, 130)); newEnd = new wxTextCtrl(this, -1, "", wxPoint(260, 130), wxSize(40, 21)); compute = new wxButton(this, BUTTON_COMPUTE, "Compute", wxPoint(135, 185), wxSize(75, 35)); clear = new wxButton(this, BUTTON_CLEAR, "Clear", wxPoint(230, 185), wxSize(75, 35)); } MainPanel::~MainPanel() {} MainFrame::MainFrame(const wxString& title, const wxPoint& pos, const wxSize& size) : wxFrame( NULL, -1, title, pos, size ) { mainPanel = new MainPanel(this); wxMenu *menuFile 
= new wxMenu; menuFile->Append( ID_About, _("&About...") ); menuFile->AppendSeparator(); menuFile->Append( ID_Quit, _("E&xit") ); wxMenuBar *menuBar = new wxMenuBar; menuBar->Append( menuFile, _("&File") ); SetMenuBar( menuBar ); CreateStatusBar(); SetStatusText( _("Hi") ); } MainFrame::~MainFrame() {} void MainFrame::OnQuit(wxCommandEvent& WXUNUSED(event)) { Close(TRUE); } void MainFrame::OnAbout(wxCommandEvent& WXUNUSED(event)) { wxMessageBox( _("Alex Olson\nProject 11"), _("About"), wxOK | wxICON_INFORMATION, this); } void MainPanel::OnCompute(wxCommandEvent& WXUNUSED(event)) { int startT; int endT; int newEndT; double tD; wxString startTString = start->GetValue(); wxString endTString = end->GetValue(); startT = wxAtoi(startTString); endT = wxAtoi(endTString); pC->GetValue().ToDouble(&tD); first.SetStartTime(startT); first.SetEndTime(endT); first.SetTimeDiff(tD); try { first.ValidateData(); newEndT = first.ComputeEndTime(); *newEnd << newEndT; } catch (BaseException& e) { wxMessageBox(_(e.GetMessage()), _("Something Went Wrong!"), wxOK | wxICON_INFORMATION, this); } } void MainPanel::OnClear(wxCommandEvent& WXUNUSED(event)) { start->Clear(); end->Clear(); pC->Clear(); newEnd->Clear(); }
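    For comparison, button clicks are delivered as wxEVT_COMMAND_BUTTON_CLICKED events, which an event table matches with the EVT_BUTTON macro rather than EVT_MENU (EVT_MENU only matches menu and toolbar commands). A sketch of the panel's event table wired that way, using the IDs and handlers already declared in the code above:
        BEGIN_EVENT_TABLE(MainPanel, wxPanel)
            EVT_BUTTON(BUTTON_COMPUTE, MainPanel::OnCompute)  // fires on the Compute button click
            EVT_BUTTON(BUTTON_CLEAR,   MainPanel::OnClear)    // fires on the Clear button click
        END_EVENT_TABLE()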

    Read the article

  • Recursively CVS add files/directories and ignore existing CVS files.

    - by meder
    There's a similar post @ http://stackoverflow.com/questions/5071/how-to-add-cvs-directories-recursively However, trying out some of the answers such as: find . -type f -print0| xargs -0 cvs add Gave: cvs add: cannot open CVS/Entries for reading: No such file or directory cvs [add aborted]: no repository And find . \! -name 'CVS' -and \! -name 'Entries' -and \! -name 'Repository' -and \! -name 'Root' -print0| xargs -0 cvs add Gave: cvs add: cannot add special file `.'; skipping Does anyone have a more thorough solution to recursively adding new files to a CVS module? It would be great if I could alias it too in ~/.bashrc or something along those lines. And yes, I do know that it is a bit dated but I'm forced to work with it for a certain project otherwise I'd use git/hg.
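    A hedged sketch of a ~/.bashrc helper along the lines being asked for: add directories first (CVS requires a parent to be added before its children), then files, skipping the CVS administrative directories. GNU find/xargs are assumed, it should be run from inside a checked-out working copy, and the 2>/dev/null only silences "already added" noise.
        # ~/.bashrc
        cvsaddall() {
            # Add directories first, excluding CVS metadata directories.
            find . -mindepth 1 -type d -name CVS -prune -o -type d -print0 \
                | xargs -0 -r cvs add 2>/dev/null
            # Then add regular files, ignoring anything inside CVS/ directories.
            find . -path '*/CVS/*' -prune -o -type f -print0 \
                | xargs -0 -r cvs add 2>/dev/null
        }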

    Read the article

  • win7 amd64 guest in kvm does not have sound

    - by davidshen84
    Hi, my host system is Gentoo amd64 and the guest system is Win 7 amd64. The guest system works, except that it has no sound. I start KVM with -soundhw ac97 and QEMU_AUDIO_DRV='alsa', and after I get into the guest system I can see a 'Multimedia Audio Controller' in Device Manager, but Win7 cannot find a driver for it. I have searched the web for a long time and cannot find an Intel AC'97 driver for Win7 amd64. I also tried -soundhw sb16 and es1370; neither of them works. Please help me fix this.

    Read the article

  • Read cookies in silverlight

    - by Dharam Narayan
    Hi, I have an ASP.NET MVC application. After the user signs in, we set a cookie for the logged-in user using FormsAuthentication.SetAuthCookie(userName, false). On another page we get the cookie using FormsAuthentication.GetAuthCookie(userName). This cookie value, as a string, is then set with Response.Cookies["username"].Value = cookiesvalue. We have an .aspx page in the same application that downloads the Silverlight application. Silverlight reads the cookies using the code string[] cookies = HtmlPage.Document.Cookies.Split(';'); The problem is that once the session expires in the application, Silverlight cannot read the cookie value. After the session expires we set the cookie in the headers again using Response.Cookies["username"].Value = cookiesvalue, but the Silverlight application still cannot read this cookie. Thanks in advance, DNM
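    For reference, a small helper of the kind being described for pulling one named value out of HtmlPage.Document.Cookies; the cookie name is illustrative. Note that a cookie only shows up here if it is not marked HttpOnly, has not expired, and its path covers the page hosting the Silverlight plug-in.
        using System.Windows.Browser;

        public static class CookieReader
        {
            // Returns the value of the named cookie, or null if the browser does not
            // expose it to the plug-in (expired, HttpOnly, wrong path, etc.).
            public static string GetCookie(string name)
            {
                foreach (string pair in HtmlPage.Document.Cookies.Split(';'))
                {
                    string[] parts = pair.Split(new[] { '=' }, 2);
                    if (parts.Length == 2 && parts[0].Trim() == name)
                        return parts[1].Trim();
                }
                return null;
            }
        }

        // usage: string user = CookieReader.GetCookie("username");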

    Read the article

  • Updating Visual FoxPro from SQL Server

    - by David Stein
    I'm trying to update some simple Visual FoxPro tables with SQL Server. I've created a linked server with the following: sp_addlinkedserver @server = 'UTIL', @srvproduct = 'VFP', @provider = 'VFPOLEDB', @datasrc = 'L:\M2MDATA\Util\util.dbc' GO And the following works: select * from UTIL...utcomp However, I cannot use the following statement: update util...utcomp set fmaddress = '123 Elvis Dr.' where fcsqldb = 'M2MDATA01' I receive the error: OLE DB provider "VFPOLEDB" for linked server "util" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.". Msg 7333, Level 16, State 2, Line 2 Cannot fetch a row using a bookmark from OLE DB provider "VFPOLEDB" for linked server "util". I have the latest version (9.0) installed so I should have the latest provider. Am I doing something wrong? Is it not possible to update VFP from SQL?
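    One commonly suggested variation when a four-part UPDATE fails against an OLE DB provider is to push the row selection down to the provider with OPENQUERY and update through it. A sketch using the names from the example; whether VFPOLEDB will accept the updatable rowset this produces is not guaranteed:
        UPDATE OPENQUERY(UTIL,
            'SELECT fmaddress FROM utcomp WHERE fcsqldb = ''M2MDATA01''')
        SET fmaddress = '123 Elvis Dr.';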

    Read the article

  • Reference DLLs not loading in Visual Studio 2010

    - by Adam Haile
    I'm working on a C# 4.0 project in VS2010 and needed to use some older DLLs containing controls that were created in C# 3.5 on VS2008. When I first add the DLLs to the references, I was able to see the namespace via intellisense and create an instance of one of the controls, but when I go to build, it gives me the following error: The type or namespace name 'BCA' could not be found (are you missing a using directive or an assembly reference?) And I do have a using directive for that namespace already, which is now underlined in red, showing that VS cannot find it. And now, intellisense won't pick up that namespace at all. I even tried added the controls to the toolbox (which worked) but then when I drag them to the GUI, it says that it cannot locate the DLL reference, even though it obviously knows where it is. I even tried changing the target framework to 3.5, but still with the same results. Any thoughts as to why this could be happening?

    Read the article

  • NoSuchPortException using RXTX Java library on Windows?

    - by Steve
    I have followed the instructions to set up RXTX on Windows from http://www.jcontrol.org/download/readme_rxtx_en.html. What I did exactly was copy rxtxSerial.dll to "C:\Program Files\Java\jdk1.6.0_07\jre\bin" and RXTXcomm.jar to "C:\Program Files\Java\jdk1.6.0_07\jre\lib\ext" (my JAVA_HOME variable is set to C:\Program Files\Java\jdk1.6.0_07\jre). I also added RXTXcomm.jar to my Eclipse project. But when I run it, it still says "NoSuchPortException":
        Devel Library
        =========================================
        Native lib Version = RXTX-2.0-7pre1
        Java lib Version = RXTX-2.0-7pre1
        java.lang.ClassCastException: gnu.io.RXTXCommDriver cannot be cast to gnu.io.CommDriver thrown while loading gnu.io.RXTXCommDriver
        gnu.io.NoSuchPortException
            at gnu.io.CommPortIdentifier.getPortIdentifier(CommPortIdentifier.java:218)
            at TwoWaySerialComm.connect(TwoWaySerialComm.java:20)
            at TwoWaySerialComm.main(TwoWaySerialComm.java:107)
    In my Java file, I call: try { (new TwoWaySerialComm()).connect("COM4"); } and I've also tried the Java Comm API. Neither can recognize my serial port, but I am sure I followed the instructions correctly. The files are there. Does anybody have any idea what it could be?
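    One quick sanity check is to ask RXTX which ports it can actually see before calling getPortIdentifier; if COM4 is not in the list, the NoSuchPortException is expected. A short sketch using the gnu.io classes that RXTXcomm.jar provides:
        import gnu.io.CommPortIdentifier;
        import java.util.Enumeration;

        public class ListPorts {
            public static void main(String[] args) {
                // Prints every port the native RXTX library detects on this machine.
                Enumeration<?> ports = CommPortIdentifier.getPortIdentifiers();
                while (ports.hasMoreElements()) {
                    CommPortIdentifier id = (CommPortIdentifier) ports.nextElement();
                    System.out.println(id.getName() + " (type " + id.getPortType() + ")");
                }
            }
        }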

    Read the article

  • Problem restoring a SQL Server backup

    - by Elmex
    I have a SQL Server 2008 instance which is part of a domain. I make a backup of a database on this server and restore it on a SQL Server which is not part of a domain. I have a C# application which uses this database. On the non-domain machine I now get exceptions like this: "Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission." I think the problem is that the database owner is a domain user and this user doesn't exist on the target machine where the backup was restored. How can I solve this?
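    That diagnosis matches a common post-restore situation: the restored database's owner is a login that does not exist on the target server, so the dbo principal cannot be mapped. A hedged sketch of reassigning ownership to a login that does exist there; the database and login names are placeholders:
        -- Point the restored database's dbo at a login that exists on this server.
        ALTER AUTHORIZATION ON DATABASE::MyRestoredDb TO [sa];

        -- Older, equivalent approach:
        -- USE MyRestoredDb;
        -- EXEC sp_changedbowner 'sa';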

    Read the article

  • Edit and Continue does not Work in VS 2010 / ASP.Net MVC 2

    - by Eric J.
    Although Enable Edit and Continue is checked on the Web tab of my ASP.Net MVC 2 project, I cannot in fact change the source code while running. For example, if I try to edit a controller while paused in the debugger, I cannot change the file (acts as if read only). I found a related post Edit and continue in ASP.NET web projects, however The answers seem to suggest I should be able to at least edit the code, then reload the page to see the result. I don't know what the distinction is between a Web Application and Web Site projects Any guidance is appreciated.

    Read the article

  • ASP.NET - Missing #includes cause compilation errors: Failed to map the path '...'

    - by frankadelic
    I have an ASP.NET application which features some server-side includes. For example: <!--#include virtual="/scripts.inc" --> These files are not present in my ASP.NET website project because my website starts in a virtual directory: /path-to-my-application When I choose Build Web Site, I get this error: Failed to map the path '/scripts.inc' Visual Studio cannot resolve these include files that are defined at the root directory level. They are not visible in the website project. Aside from manually commenting out the #include references, is there any way I can get the website to build? Can I force Visual Studio to ignore those errors and compile the site? Once the website is pushed out to IIS, there is no problem, because all the #include files are in place. NOTE - Web Controls are not an option for this application. Please assume #include files are a requirement. Also, I cannot move the include files since they are used by other applications.

    Read the article

  • Can dummy objects be simulated on iphone?

    - by Mike
    I come from 3D animation and one of the basic things all 3D software have is the ability to create dummy objects. Dummy objects can be used to groups objects that can be rotated, moved or scaled together around a specific anchor point. This is the idea of what I am asking. Obviously we can have fake dummies by using a view and put other views as subviews, but this has problems as the view receives clicks and sometimes you don't want it to do so. You cannot change the anchorpoint of a view too. So, the dummies as I ask have, at least, these properties: adjustable anchor point it is not clickable it is totally invisible (cannot be rendered). any scale, rotation and translation of a dummy are propagated to the grouped objects considering the dummy's anchor point. it is totally animatable. Can this be simulated on iPhone? Is there any object that can be created to simulate this? thanks.

    Read the article

  • Why isn't my assets folder being installed on emulator?

    - by Brad Hein
    Where are my assets being installed to? I utilize an assets folder in my new app. I have two files in the folder. When I install my app on the emulator, I cannot access my assets, and furthermore I cannot see them on the emulator filesystem. Extracted my apk and confirmed the assets folder exists: $ ls -ltr assets/ total 16 -rw-rw-r--. 1 brad brad 1050 2010-05-20 00:33 schema-DashDB.sql -rw-rw-r--. 1 brad brad 9216 2010-05-20 00:33 dash.db On the emulator, no assets folder: # pwd /data/data/com.gtosoft.dash # ls -l drwxr-xr-x system system 2010-05-20 00:46 lib # I just want to package a pre-built database with my app and then open it to obtain data when needed. Just tried it on my Moto Droid, unable to access/open the DB, just like the emulator: DBFile=/data/data/com.gtosoft.dash/assets/dash.db Building the DB on the fly from a schema file is out of the question because its such a slow process (about 5-10 statements per second is all I get for throughput).
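    Assets are not unpacked onto the device filesystem; they stay inside the .apk and are read through AssetManager, which is why no assets folder shows up under /data/data/com.gtosoft.dash. For a prebuilt SQLite file, the usual pattern is to copy the asset out to the app's database directory once and open that copy. A hedged sketch; the file name comes from the question, the rest is illustrative:
        import android.content.Context;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public final class DbInstaller {
            // Copies assets/dash.db to the app's databases directory so it can be
            // opened with SQLiteDatabase.openDatabase() or a SQLiteOpenHelper.
            public static File copyDatabase(Context context) throws IOException {
                File target = context.getDatabasePath("dash.db");
                target.getParentFile().mkdirs();
                InputStream in = context.getAssets().open("dash.db");
                OutputStream out = new FileOutputStream(target);
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.close();
                in.close();
                return target;
            }
        }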

    Read the article

  • Labeling connections using Oracle Universal Connection Pool (UCP) doesn't seem to work

    - by sapporo
    I'm evaluating the Oracle Universal Connection Pool (UCP) included with their 11.2 JDBC driver, but cannot get the labeling functionality to work properly (which I'd like to use to minimize the amount of ALTER SESSION statements). I'm basically following their sample code to implement the ConnectionLabelingCallback interface. The callback method cost(Properties reqLabels, Properties currentLabels) is called as expected, so the callback seems to be configured correctly. However, the other callback method configure(Properties reqLabels, Object conn) is passed arguments that make it impossible to provide a meaningful implementation: reqLabels is always null, so the callback method has no way of knowing what labels need to be set. conn is of type oracle.jdbc.driver.T4CConnection, which cannot be cast to LabelableConnection, so the callback method has no way of inspecting or manipulating the labels attached to the connection. My code is running on Java 6 and using ojdbc6.jar and ucp.jar version 11.2.0.1.0. The database is actually 10.2, but that should not be a problem according to the docs, since the UCP functionality is completely implemented in the JDBC driver. Thanks in advance!

    Read the article
