Search Results

Search found 22301 results on 893 pages for 'software sources'.

Page 803 of 893

  • Deploying and publishing my first asp.net mvc 3 web application

    - by john G
    I want to deploy and publish my first ASP.NET MVC 3 web application at a client's site (the client is a small office with 2-4 employees who need to access the application). I finished developing the web application using the free Microsoft Visual Web Developer 2010 Express and the free SQL Server 2008 R2 Express database.

    My concern is that the free SQL Express database I am currently using has a 10 GB limit (I think), so I want to buy SQL Server for Small Business to remove the database limitations. The questions I need help with are:

    1. If I use SQL Server for Small Business, will my web application have other limitations in production that I am unaware of?
    2. Is SQL Server for Small Business the right choice, bearing in mind that the system will be used by 2-4 clients only?
    3. Approximately how much will the SQL Server database cost, in US dollars?
    4. Is there any other software I need to buy to be able to deploy and publish the application on an intranet and on the Internet?

    Appreciate any help. Best regards

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost, with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in make.conf so that each host only pulls down binary packages. All works well from that side. I use eix to check software versions for upgrades, but I have hit a problem where eix sees an available version ahead of what is actually available on the binhost. Using glibc as an example:

        ietpl [VE] / # emerge -s glibc
        Searching...
        [ Results for search key : glibc ]
        [ Applications found : 1 ]
        *  sys-libs/glibc
              Latest version available: 2.14.1-r3
              Latest version installed: 2.14.1-r3
              Homepage:
              Description: GNU libc6 (also called glibc2) C library
              License: LGPL-2

    Then eix reports a higher version available:

        ietpl [VE] / # export LASTVERSION='{last}<version>{}'
        ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
        sys-libs glibc [2.14.1-r3] [2.15-r2]

    What I'm after is for eix to report the latest available version as 2.14.1-r3, like emerge does. I have a feeling this is possible, since without any formatting eix returns

        Available versions:  (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s

    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the binary flag would do it, but that returns no matches:

        --binary     Match packages with *.tbz2 files.
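
    Not an eix-native solution, but one rough workaround - an assumption on my part rather than anything from eix's documentation - is to post-process eix's plain output and keep only the versions it tags with {tbz2}, since that marker is exactly what identifies packages present on the binhost in the output above:

        # Fragile sketch: print the newest version eix tags with {tbz2} for one package.
        # It relies on the textual output format shown above, not on a stable interface.
        eix --nocolor --exact --category-name sys-libs/glibc \
          | tr ' ' '\n' \
          | grep 'tbz2' \
          | tail -n1 \
          | sed -e 's/{tbz2}//' -e 's/!s//' -e 's/^~//'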

    Read the article

  • Is there any merit in routinely restoring a Linux system, even if it's unnecessary?

    - by field_guy
    I do fieldwork with a number of computers running Ubuntu that perform critical tasks in the field. The computers are similarly configured, with slight variations. Since we've had some configuration issues in the past, my boss is pressing for us to take an image of the installation on each computer and restore each computer to that image before it goes into the field.

    My preferred solution would be to write a common script that checks that the configuration of the system is correct and that the system is operational. If a computer has been verified, isn't restoring it to that configuration redundant? And are there any inherent problems with doing so?

    My reluctance stems from the fact that our software and configuration are subject to change in the field, but these changes must be made across all the computers. That means that when a change is made, all the restoration images have to be updated as well. The differences in the configuration of each of the computers live in /etc. In the event that restoration is required, I would prefer to keep a single image containing everything that is common to all machines, and have a snapshot of each computer's /etc directory to be used for restoring the state of that particular machine. What's the better approach?
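
    For the per-machine /etc snapshot idea, here is a minimal sketch using rsync; the host name and snapshot location are placeholders, not anything from the setup described above:

        #!/bin/sh
        # Capture or restore a per-machine /etc snapshot on top of the common image.
        HOST=$(hostname)
        SNAPDIR=/srv/field-snapshots/$HOST

        case "$1" in
          capture) rsync -aAX --delete /etc/ "$SNAPDIR/etc/" ;;   # -A/-X preserve ACLs and xattrs
          restore) rsync -aAX --delete "$SNAPDIR/etc/" /etc/ ;;
          *)       echo "usage: $0 capture|restore" >&2; exit 1 ;;
        esac

    Running the capture side from cron also gives a rolling record of how each machine's /etc drifts from the common image.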

    Read the article

  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I have a package, a-package.deb, which depends on b-package.deb in version 1.0. Everything works fine. But now a b-package in version 1.1 gets added to the repository. I'd expect apt-get to install a-package and version 1.0 of b-package. What really happens is that a-package won't get installed:

        # apt-get install a-package
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help to
        resolve the situation:

        The following packages have unmet dependencies:
         a-package : Depends: b-package (= 1.0) but 1.1 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Is there a workaround to fix this behavior? Is there other software I could use that can handle the dependencies as defined?
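
    One workaround, sketched below, is to tell apt explicitly which version of the dependency to use, either on the command line or with a pin; the package names simply mirror the hypothetical a-package/b-package above:

        # Install the exact dependency version a-package was built against, in one transaction:
        apt-get install b-package=1.0 a-package

        # Or pin b-package so the resolver keeps choosing 1.0 even though 1.1 exists:
        printf 'Package: b-package\nPin: version 1.0\nPin-Priority: 1001\n' \
            > /etc/apt/preferences.d/b-package
        apt-get install a-package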

    Read the article

  • I just deleted my backup file! How do I save it?

    - by Sammy
    I just accidentally deleted a backup file that I need to restore my system. It's an Acronis True Image TIB file. It was stored at H:\My backups and the name of the file was File_backup_2012-10-18.tib.

    I did a quick scan with Recuva 1.43.623 and it found the file using the recovery wizard, but it was unable to recover it. The "state" of the file is "unrecoverable", so the resulting file is 0 bytes. I am trying a deep scan with Recuva right now, but it takes a lot of time. If it fails, what other recovery options do I have? Is there any other good file recovery software that's free for home users?

    I do have a second copy of the whole system partition, but I needed this file backup copy because it is more up to date.

    That's the file, right there (in Recuva's results) - but why is Recuva unable to recover it?
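
    If the deep scan fails, one free route - my suggestion, not something from the original post - is to image the partition first and then run signature-based carving against the image with TestDisk/PhotoRec; both tools also run on Windows, though the sketch below assumes the drive is attached to a Linux machine or live USB:

        # Image the partition first so further recovery attempts never touch the original disk.
        # /dev/sdb1 and the output paths are placeholders.
        ddrescue -d -r3 /dev/sdb1 hdrive.img hdrive.mapfile

        # Then carve files out of the image (PhotoRec is interactive). Whether its signature
        # list recognizes Acronis .tib files depends on the version, so check before relying on it.
        photorec hdrive.img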

    Read the article

  • Looking for a VCS wrapper that tracks system file changes across the whole *nix OS and sends diffs by email

    - by nextus
    I need some software that watches custom directories across the whole OS (e.g. /etc) and alerts me if someone edits a file inside them. Additionally, this tool must automatically commit and push changes to a backup server, so I can easily determine when a specific change to a specific file was made. I'm using cvsbackup right now, but I want to create or find something more modern.

    I think using git as the VCS is a great idea. I could have a local repository and easily revert changes to my configuration files. Furthermore, pushing changes to a remote repository would help me recover my configuration files if the server fails. It doesn't seem difficult to write a wrapper around git, but there are a few problems. For example, I need to track custom directories, /usr/local/nginx/ and /etc/, so the root of my git repository is /. I don't need to track the other directories, so I must write an unwieldy whitelist .gitignore:

        *
        !.gitignore
        !/etc/
        !etc/*
        !/usr
        /usr/*
        !/usr/local
        /usr/local/*
        !/usr/local/nginx
        !/usr/local/nginx/*

    This is daunting and error-prone, so maybe it would be better to create an intermediate file that the wrapper reads and converts to .gitignore format. Additionally, I don't want to keep my .git folder on the / partition, so I need to set appropriate GIT_DIR and GIT_WORK_TREE variables for git.

    Are there any ready-to-use tools for this task? I haven't found any, but I don't believe that no one needs this feature.
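
    For the wrapper part, here is a minimal cron-driven sketch built on git's --git-dir/--work-tree options; the repository location, tracked directories, remote and mail address are all placeholders:

        #!/bin/sh
        # One-time setup (run once per host):
        #   git init --bare /var/backups/cfg.git
        #   git --git-dir=/var/backups/cfg.git config status.showUntrackedFiles no
        #   git --git-dir=/var/backups/cfg.git remote add origin git@backup-host:cfg.git
        CFG="git --git-dir=/var/backups/cfg.git --work-tree=/"
        TRACKED="/etc /usr/local/nginx"

        $CFG add $TRACKED
        if ! $CFG diff --cached --quiet; then
            $CFG commit -m "config change on $(hostname) at $(date -R)"
            $CFG push origin master
            # The HEAD~1 diff assumes at least one prior commit exists.
            $CFG diff HEAD~1 HEAD | mail -s "config change on $(hostname)" admin@example.com
        fi

    Because only the whitelisted paths are ever added and untracked files are hidden from status, no elaborate .gitignore is needed, and the repository itself lives outside /. For /etc alone, etckeeper already packages much of this pattern.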

    Read the article

  • Seeking faster access/transfer times for accounting application

    - by Markaway
    Our accounting software, Sage 50, has been getting slower to open on workstations and to read the company file. The company file only contains 2 years' worth of transactions, and we just cleared out 2011, so the file size has gotten a lot smaller. There are 10 users, 6 of whom are on it all day; 4 are on and off throughout the day.

    Our network is entirely GbE and the switches are set to prioritize traffic on that port number. Watching network traffic, we barely use 40% of the network capacity on a workstation, so I don't think that is our bottleneck. Our server contains two older 150 GB Raptors, SATA 2 (3 Gb/s), in RAID 1. We were considering switching to SSDs, but a lot of what I read says to stay away from MLC drives, especially in a production environment, and definitely to avoid putting them in a RAID config.

    So would upgrading to newer Raptors with SATA 3 (6 Gb/s) offer noticeable benefits? What other options are out there that aren't so expensive? I'm trying to keep it to $200-300 per drive. We need at least 150 GB, but going to 250-300 GB would be better, as it gives us more room to grow. We have about 30% space remaining on what we have now.

    Read the article

  • How do I change the default ftp folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder.

    WordPress has a feature called Auto Update. Clicking an auto update button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup.

    I feel that if I can adjust the initial directory that the FTP service presents to the Auto Update feature, everything else will work properly. So, my question is: how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
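
    One possible approach - assuming the built-in ftpd serves each user's home directory, which is the default - is to give the account WordPress uses for FTP a home directory equal to the WordPress docroot; "wpftp" below is a placeholder account name and is assumed to exist already:

        # Point the FTP user's home at the WordPress folder, then make it writable by that user.
        sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents
        sudo chown -R wpftp /Library/WebServer/Documents

    After that, an FTP login as wpftp should land directly in the WordPress installation, which is what the Auto Update feature expects to see.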

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school. This makes a regular Linux server appear as a directory controller. There are two relevant benefits to this:

    1. I have a Samba share for people's documents, and I use the Redirected Folders feature so any employee can sit down at any PC, log in with their domain credentials, and have their My Documents point to network storage.
    2. Everyone has a mapped drive (using Group Policy Preferences) to a share specific to their account type. Students can access one share (one share for all students), teachers have another, and office staff have another.

    However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want to. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical if only because only professional editions of Windows support it. Ideally, any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.)

    The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions. Writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba 4.

    Right now I am completely flexible. I am even fine with scrapping the whole domain and using some other software for the two features above. How can I allow school employees and students the freedom to securely share files without requiring everyone to have specific editions of Windows?
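
    One detail worth noting (my addition, not from the setup above): a device does not have to be domain-joined to use the existing shares; it only needs to authenticate with domain credentials. A sketch of mounting one of the shares from a personal Linux laptop, with host, share and user names as placeholders:

        # SMB mount with domain credentials from a non-joined machine.
        sudo mount -t cifs //dc.school.local/staff /mnt/staff \
             -o username=jdoe,domain=SCHOOL,uid=$(id -u),sec=ntlmssp

    Windows and macOS clients can do the equivalent by entering SCHOOL\jdoe when prompted for credentials; Group Policy drive mappings and Folder Redirection, however, only apply to joined machines.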

    Read the article

  • Can't log in to Windows Server 2008 (as any user, not even locally or in safe mode, but I have the right credentials)

    - by Saix
    Out of nowhere, I can't log in to my Windows Server 2008 machine. All the services, like the FTP server and the webserver (which I'm actually not using; we just use Remote Desktop and FTP), are running. Whatever credentials I try (even/especially administrator), it always says "Unknown username or bad password". I have already tried a hard power cycle and safe mode, without luck. I have also tried typing the login name as SERVERNAME\user or Workgroup\user (every case-sensitive variation); it still says my login is wrong.

    Usually we use Remote Desktop to access the machine, but local access over KVM doesn't work either. Now I'm locked out of any control or any way to do anything. There's just the logon screen, preceded by the Ctrl+Alt+Del prompt. Without being able to log in, I can't actually try to fix anything. I can't find much more on the Internet except the SERVERNAME\user suggestion. A reinstall would be the last resort, but I can't leave things this way for much longer anyway; this server is vital.

    If it helps: I think automatic Windows updates are turned off, and there have been no updates or newly installed software for the last couple of years, and just a few soft restarts, none of them recent. It happened during runtime while all the other services were still up and running, so this couldn't just be some nasty Windows screw-up during boot. What could have possibly changed? What are my options now?

    Read the article

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These create a lot of management overhead, as we have to update them as well as the host operating systems every three months (at the start of the next quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130 GB to each image, which is over 3 TB of extra data to send over the network, a lot more time to apply each image, and then we still have to boot all 120 VMs and assign each a unique IP address and host name.

    We would like to get the VMs off the hosts and onto a server, but after looking around for a few days, I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind the ideal solution would be to replace the VMs on the host machines with five thin-client operating systems, each configured to connect to a server and be given a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it.

    If another client application has to be installed on the hosts, that is fine; the only reason I'd like to keep them in VMware Workstation is that students already know to look there for the VMs when they need to use them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?
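
    As a rough illustration of the create-on-demand/destroy cycle described above, VMware's vmrun CLI can clone from a template and tear the clone down afterwards. This is only a sketch: it assumes a Workstation/VIX version whose vmrun supports the clone command, and all paths and names are placeholders:

        # Check out a linked clone for one student session, then discard it afterwards.
        TEMPLATE=/vm/templates/lab-base/lab-base.vmx
        CLONE=/vm/sessions/session-$$/lab.vmx

        vmrun -T ws clone "$TEMPLATE" "$CLONE" linked -snapshot=base -cloneName=lab-$$
        vmrun -T ws start "$CLONE" nogui
        # ... student session runs ...
        vmrun -T ws stop "$CLONE" soft
        vmrun -T ws deleteVM "$CLONE"

    A broker that runs something like this per login, with thin clients pointed at the server, is essentially what VDI products automate.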

    Read the article

  • How to merge Windows registry hives directly without converting them to an intermediate text based file?

    - by Registrar
    Help! I'm going to get fired if I can't figure out how to do this by tomorrow.

    Microsoft Windows stores its registry databases (known as "registry hives" - there's actually a backstory to the origin of this name, but I digress) in a proprietary binary format.

    Answer this correctly or you lose your job: Let H-sub-A be the registry hive of Computer A, and let H-sub-B be the registry hive of Computer B. Create a registry hive H-sub-A-prime (in the native binary format) that contains all of the registry keys and values in both H-sub-A and H-sub-B. If there is overlap, let the value from H-sub-B overwrite the value in H-sub-A.

    Sure, you can import a text-based patch file (e.g., "FOO.REG") to modify the registry, but can you merge two registry hives in their native binary format? Answers that involve exporting the registry to a text file (e.g., "FOO.REG") will receive no credit. You may only use software included with Microsoft Windows (any version) and / or third-party tools that are free of charge.

    Read the article

  • Windows 7 constantly accessing hard drive [duplicate]

    - by Zohar
    Possible Duplicate: Tool which finds which process is causing the heavy hard drive activity?

    Have you noticed that on Windows 7 (I use 64-bit) the hard drive LED is constantly blinking, which means the OS is constantly wearing down the hard drive by accessing it? It's something related to the System process, and it even occurs in safe mode, so I don't think it's a third-party software problem. Has anyone else experienced this problem? Is it a Windows problem, or is it caused by something else?

    Edit: My indexing service is reduced to indexing only the Start Menu. Even if it were set to index the whole computer, it would eventually stop; that's not it. My friends also suffer from the same problem. Please answer my first question: have any of you seen a Windows 7 machine whose hard drive LED is at rest? I'm also trying to track down the offending process using procmon and Resource Monitor, and it actually seems to be a system process. It could also be svchost.exe, and I'm not sure which files they are accessing, since I see a lot of activity that I can't make sense of: loading system DLLs, accessing registry keys, and other nonsense.

    Read the article

  • Would a USB hub work in reverse?

    - by Tim
    Imagine for a moment a 4-port USB hub. Normally the hub has one plug that goes to the computer, and 4 ports that you can plug other things into (thumb drive, keyboard, mouse, etc.).

    I am wondering if I can use it in reverse. I would have 1 keyboard going into the hub, and then plug male-to-male USB cables from the 4 ports to 4 different PCs. My aim is that when a key is pressed on the keyboard, all 4 PCs will receive it as if the keyboard were plugged into them directly. Does anyone know if this would work? And if not, does anyone have any ideas how I could get the same effect?

    EDIT: So I am looking for more of a KVM-switch-type device rather than a USB hub. However, all of the KVM switches I've found use some sort of mechanism to select which computer you'll be using (some are physical switches / buttons, others do it via software "automatically" somehow). I need to have 1 keyboard hooked up to 2 computers, and when I press a key on the keyboard I want the keypress to be sent to both computers simultaneously, not to one or the other. Does anyone know if KVMs with this feature exist?

    Read the article

  • Need Help Setting an Image with Transparent Background to Clipboard

    - by AMissico
    I need help setting a transparent image to the clipboard. I keep getting "handle is invalid". Basically, I need a second set of eyes to look over the following code. (The complete working project is at ftp://missico.net/ImageVisualizer.zip.) This is an image debug visualizer class library, but I made the included project run as an executable for testing. (Note that the window is a toolbox window and "show in taskbar" is set to false.)

    I was tired of having to perform a screen capture on the toolbox window, open the screen capture with an image editor, and then delete the background that was added because it was a screen capture. So I thought I would quickly put the transparent image onto the clipboard. Well, the problem is... no transparency support for Clipboard.SetImage. Google to the rescue... not quite. This is what I have so far, pulled from a number of sources (see the code for the main reference). My problem is the "invalid handle" when using CF_DIBV5. Do I need to use BITMAPV5HEADER and CreateDIBitmap? Any help from you GDI/GDI+ wizards would be greatly appreciated.

        public static void SetClipboardData(Bitmap bitmap, IntPtr hDC)
        {
            const uint SRCCOPY = 0x00CC0020;
            const int CF_DIBV5 = 17;
            const int CF_BITMAP = 2;

            //'reference
            //'http://social.msdn.microsoft.com/Forums/en-US/winforms/thread/816a35f6-9530-442b-9647-e856602cc0e2
            IntPtr memDC = CreateCompatibleDC(hDC);
            IntPtr memBM = CreateCompatibleBitmap(hDC, bitmap.Width, bitmap.Height);
            SelectObject(memDC, memBM);

            using (Graphics g = Graphics.FromImage(bitmap))
            {
                IntPtr hBitmapDC = g.GetHdc();
                IntPtr hBitmap = bitmap.GetHbitmap();
                SelectObject(hBitmapDC, hBitmap);
                BitBlt(memDC, 0, 0, bitmap.Width, bitmap.Height, hBitmapDC, 0, 0, SRCCOPY);

                if (!OpenClipboard(IntPtr.Zero))
                {
                    throw new System.Runtime.InteropServices.ExternalException("Could not open Clipboard", new Win32Exception());
                }
                if (!EmptyClipboard())
                {
                    throw new System.Runtime.InteropServices.ExternalException("Unable to empty Clipboard", new Win32Exception());
                }

                //IntPtr hClipboard = SetClipboardData(CF_BITMAP, memBM); //works but image is not transparent
                //all my attempts result in SetClipboardData returning hClipboard = IntPtr.Zero
                IntPtr hClipboard = SetClipboardData(CF_DIBV5, memBM); //because
                if (hClipboard == IntPtr.Zero)
                {
                    // InnerException: System.ComponentModel.Win32Exception
                    //     Message="The handle is invalid"
                    //     ErrorCode=-2147467259
                    //     NativeErrorCode=6
                    //     InnerException:
                    throw new System.Runtime.InteropServices.ExternalException("Could not put data on Clipboard", new Win32Exception());
                }
                if (!CloseClipboard())
                {
                    throw new System.Runtime.InteropServices.ExternalException("Could not close Clipboard", new Win32Exception());
                }

                g.ReleaseHdc(hBitmapDC);
            }
        }

        private void __copyMenuItem_Click(object sender, EventArgs e)
        {
            using (Graphics g = __pictureBox.CreateGraphics())
            {
                IntPtr hDC = g.GetHdc();

                MemoryStream ms = new MemoryStream();
                __pictureBox.Image.Save(ms, ImageFormat.Png);
                ms.Seek(0, SeekOrigin.Begin);
                Image imag = Image.FromStream(ms);

                // Derive Bitmap object using Image instance, so that you can avoid the issue
                // "a graphics object cannot be created from an image that has an indexed pixel format"
                Bitmap img = new Bitmap(new Bitmap(imag));

                SetClipboardData(img, hDC);
                g.ReleaseHdc();
            }
        }

    Read the article

  • Perl CGI that sends a temporary loading page to client then later sends the actual results page

    - by Kurt W. Leucht
    I've wasted at least half a day of my company's time searching the Internet for an answer, and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, Ajax streaming, Comet, XMPP, etc.), and I can't get a simple hello-world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution.

    All I want to do is write a simple Perl CGI script that, when accessed, immediately returns some HTML that tells the user to wait, or maybe sends an animated GIF. Then, without any user intervention (no mouse clicks or anything), I want the CGI script to at some later time replace the wait message or the animated GIF with the actual HTML results of their query. I know this is simple stuff and websites do it all the time, but I can't find a single working example that I can cut and paste onto my machine.

    Here is my simple hello-world example, which I've compiled from various Internet sources, but it doesn't seem to work. When I refresh this CGI URL in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but not the results web page. What am I doing wrong?

        #!C:\Perl\bin\perl.exe

        use CGI;
        use CGI::Carp qw/fatalsToBrowser warningsToBrowser/;

        sub Create_HTML {
            my $html = <<EOHTML;
        <html>
        <head>
        <meta http-equiv="pragma" content="no-cache" />
        <meta http-equiv="expires" content="-1" />
        <script type="text/javascript" >
        var xmlhttp=false;
        /*@cc_on @*/
        /*@if (@_jscript_version >= 5)
        // JScript gives us Conditional compilation, we can cope with old IE versions.
        // and security blocked creation of the objects.
        try {
          xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
          try {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          } catch (E) {
            xmlhttp = false;
          }
        }
        @end @*/

        if (!xmlhttp && typeof XMLHttpRequest!='undefined') {
          try {
            xmlhttp = new XMLHttpRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        if (!xmlhttp && window.createRequest) {
          try {
            xmlhttp = window.createRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        </script>
        <title>Ajax Streaming Connection Demo</title>
        </head>
        <body>
        Some header text.
        <p>
        <div id="response">PLEASE BE PATIENT</div>
        <p>
        Some footer text.
        </body>
        </html>
        EOHTML
            return $html;
        }

        my $cgi = new CGI;
        print $cgi->header;
        print Create_HTML();
        sleep(5);
        print "<script type=\"text/javascript\">\n";
        print "\$('response').innerHTML = 'Here are your results!';\n";
        print "</script>\n";

    Read the article

  • How to get maven gwt 2.0 build working

    - by Pieter Breed
    EDIT: Added some of the output of the mvn -X -e commands at the end.

    My company is developing a GWT application. We've been using Maven 2 and GWT 1.7 successfully for quite a while. We recently decided to upgrade to GWT 2.0. We've already updated the Eclipse project and we are able to successfully run the application in dev mode. We are struggling to get the application built using Maven, though. I'm hoping somebody can tell me what I'm doing wrong here, since I'm running out of time on this. The exact bit of the output that worries me is the 'GWT compilation skipped' message:

        [INFO] Copying 119 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Compiling 704 source files to K:\iCura\assessor\target\classes
        [INFO] [gwt:compile {execution: default}]
        [INFO] using GWT jars for specified version 2.0.0
        [INFO] establishing classpath list (scope = compile)
        [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped
        [INFO] [jspc:compile {execution: jspc}]
        [INFO] Built File: \index.jsp

    I'm pasting the gwt-maven-plugin section below. If you need anything else, please ask.

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>gwt-maven-plugin</artifactId>
            <version>1.2</version>
            <configuration>
                <localWorkers>1</localWorkers>
                <warSourceDirectory>${basedir}/war</warSourceDirectory>
                <logLevel>ALL</logLevel>
                <module>${cura.assessor.module}</module>
                <!-- use style OBF for prod -->
                <style>OBFUSCATED</style>
                <extraJvmArgs>-Xmx2048m -Xss1024k</extraJvmArgs>
                <gwtVersion>${version.gwt}</gwtVersion>
                <disableCastChecking>true</disableCastChecking>
                <soyc>false</soyc>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <!-- plugin goals -->
                        <goal>clean</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    I executed mvn clean install -X -e and this is some of the output that I get:

        [DEBUG] Configuring mojo 'org.codehaus.mojo:gwt-maven-plugin:1.2:compile' -->
        [DEBUG]   (f) disableCastChecking = true
        [DEBUG]   (f) disableClassMetadata = false
        [DEBUG]   (f) draftCompile = false
        [DEBUG]   (f) enableAssertions = false
        [DEBUG]   (f) extra = K:\iCura\assessor\target\extra
        [DEBUG]   (f) extraJvmArgs = -Xmx2048m -Xss1024k
        [DEBUG]   (f) force = false
        [DEBUG]   (f) gen = K:\iCura\assessor\target\.generated
        [DEBUG]   (f) generateDirectory = K:\iCura\assessor\target\generated-sources\gwt
        [DEBUG]   (f) gwtVersion = 2.0.0
        [DEBUG]   (f) inplace = false
        [DEBUG]   (f) localRepository = Repository[local|file://K:/iCura/lib]
        [DEBUG]   (f) localWorkers = 1
        [DEBUG]   (f) logLevel = ALL
        [DEBUG]   (f) module = com.curasoftware.assessor.Assessor
        [DEBUG]   (f) project = MavenProject: com.curasoftware.assessor:assessor:3.5.0.0 @ K:\iCura\assessor\pom.xml
        [DEBUG]   (f) remoteRepositories = [Repository[gwt-maven|http://gwt-maven.googlecode.com/svn/trunk/mavenrepo/], Repository[main-maven|http://www.ibiblio.org/maven2/], Repository[central|http://repo1.maven.org/maven2]]
        [DEBUG]   (f) skip = false
        [DEBUG]   (f) sourceDirectory = K:\iCura\assessor\src
        [DEBUG]   (f) soyc = false
        [DEBUG]   (f) style = OBFUSCATED
        [DEBUG]   (f) treeLogger = false
        [DEBUG]   (f) validateOnly = false
        [DEBUG]   (f) warSourceDirectory = K:\iCura\assessor\war
        [DEBUG]   (f) webappDirectory = K:\iCura\assessor\target\assessor
        [DEBUG] -- end configuration --

    and then this:

        [DEBUG] SOYC has been disabled by user
        [DEBUG] GWT module com.curasoftware.assessor.Assessor found in K:\iCura\assessor\src
        [INFO] com.curasoftware.assessor.Assessor is up to date.
        GWT compilation skipped
        [DEBUG] com.curasoftware.assessor:assessor:war:3.5.0.0 (selected for null)
        [DEBUG]   com.curasoftware.dto:dto-gen:jar:3.5.0.0:compile (selected for compile)
        ...

    It's finding the correct sourceDirectory. That folder has a 'com' folder within it, which ultimately contains the source of the application organized in folders as per the package structure.

    Read the article

  • Installing Rails, MySQL, etc. everything goes wrong

    - by Rits
    I've been struggling with this for a few hours. Everything just stopped working and I can't get it to work anymore. I'm a noob at Ruby, Ruby on Rails and the Terminal in general. This is really frustrating me, so I'll just try to describe my problem in as much detail as possible, hoping someone can give me a solution. I'm on Mac OS X Snow Leopard.

    I couldn't get Rails working at all just now ("Could not find gem 'rails'" headaches), but after a few attempts at reinstalling it, it suddenly worked again. Now I just can't get MySQL to work, and it sometimes even breaks the Rails installation again. This is what I do:

        sudo gem uninstall rails
        sudo gem uninstall mysql
        sudo gem uninstall mysql2

    After these commands, I check the installed gems with gem list. No MySQL gem is listed anymore, but I can still see rails (2.3.5, 2.2.2, 1.2.6). Is this normal? Does this mean I have 3 Rails installations? It doesn't make sense to me. Anyway, then I do this:

        sudo gem clean

    which fails completely. I get a bunch of errors like this:

        Attempting to uninstall fcgi-0.8.7
        Unable to uninstall fcgi-0.8.7:
        Gem::InstallError: cannot uninstall, check gem list -d fcgi

    It doesn't uninstall anything. At this point, I try to install everything again. I start with:

        sudo gem install rails

    which succeeds (I think):

        Successfully installed rails-3.0.3
        Successfully installed builder-2.1.2
        2 gems installed
        Installing ri documentation for rails-3.0.3...
        File not found: lib

    Then I update RubyGems:

        sudo gem update --system
        sudo gem install rubygems-update
        sudo update_rubygems

    It then says I have 1.3.7 installed, so I think that succeeded. So now I proceed with installing MySQL. I already have MySQL 5.5.8 installed on my machine. I did some research about installing MySQL on Snow Leopard, and it seems I have to use this command:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    I get a bunch of errors like this:

        No definition for time_set_neg
        No definition for time_set_second_part
        No definition for time_equal
        No definition for error_errno

    At this point, I assume I have both Rails and the MySQL gem installed, so I try to start a new project:

        rails new user_group -d mysql

    It works! Rails is installed correctly. Now I try generating a model:

        cd user_group
        rails generate model User

    It fails with this error:

        Could not find gem 'mysql2 (= 0, runtime)' in any of the gem sources listed in your Gemfile.
        Try running bundle install.

    So I run bundle install. It installs a lot of gems. Then I try to generate my model again. I get this error:

        /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle: dlopen(/Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle, 9): Library not loaded: libmysqlclient.16.dylib (LoadError)
          Referenced from: /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle
          Reason: image not found - /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle

    This is as far as I can get. What should I do? And why does this have to be so hard...
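
    For the final dlopen error, one commonly reported fix - with the caveat that it assumes libmysqlclient.16.dylib actually exists under /usr/local/mysql/lib; a MySQL 5.5 install may only ship libmysqlclient.18.dylib, in which case the mysql2 gem needs rebuilding against that instead - is to point the gem's bundle at the library's full path, or to make the directory visible at load time:

        # Rewrite the library reference inside the mysql2 bundle:
        sudo install_name_tool -change libmysqlclient.16.dylib \
            /usr/local/mysql/lib/libmysqlclient.16.dylib \
            /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle

        # Or, less permanently, let dyld find the MySQL libraries for this shell:
        export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH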

    Read the article

  • WPF Combobox binding: can't change selection.

    - by SteveCav
    After wasting hours on this, following on the heels of my last problem, I'm starting to feel that Framework 4 is a master of subtle evil, or my PC is haunted.

    I have three comboboxes and a textbox on a WPF form, and I have an out-of-the-box Subsonic 3 ActiveRecord DAL. When I load this "edit record" form, the comboboxes fill correctly, they select the correct items, and the textbox has the correct text. I can change the TextBox text and save the record just fine, but the comboboxes CANNOT BE CHANGED. The lists drop down and highlight, but when you click on an item, the selected item stays the same. Here's my XAML:

        <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
            <TextBlock Width="80">Asset</TextBlock>
            <ComboBox Name="cboAsset" Width="180"
                      DisplayMemberPath="AssetName"
                      SelectedValuePath="AssetID"
                      SelectedValue="{Binding AssetID}" ></ComboBox>
        </StackPanel>
        <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
            <TextBlock Width="80">Status</TextBlock>
            <ComboBox Name="cboStatus" Width="180"
                      DisplayMemberPath="JobStatusDesc"
                      SelectedValuePath="JobStatusID"
                      SelectedValue="{Binding JobStatusID}" ></ComboBox>
        </StackPanel>
        <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
            <TextBlock Width="80">Category</TextBlock>
            <ComboBox Name="cboCategories" Width="180"
                      DisplayMemberPath="CategoryName"
                      SelectedValuePath="JobCategoryID"
                      SelectedValue="{Binding JobCategoryID}" ></ComboBox>
        </StackPanel>
        <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
            <TextBlock Width="80">Reason</TextBlock>
            <TextBox Name="txtReason" Width="380" Text="{Binding Reason}"/>
        </StackPanel>

    Here are the relevant snips of my code (intJobID is passed in):

        SvcMgrDAL.Job oJob;
        IQueryable<SvcMgrDAL.JobCategory> oCategories = SvcMgrDAL.JobCategory.All().OrderBy(x => x.CategoryName);
        IQueryable<SvcMgrDAL.Asset> oAssets = SvcMgrDAL.Asset.All().OrderBy(x => x.AssetName);
        IQueryable<SvcMgrDAL.JobStatus> oStatus = SvcMgrDAL.JobStatus.All();

        cboCategories.ItemsSource = oCategories;
        cboStatus.ItemsSource = oStatus;
        cboAsset.ItemsSource = oAssets;

        this.JobID = intJobID;
        oJob = SvcMgrDAL.Job.SingleOrDefault(x => x.JobID == intJobID);
        this.DataContext = oJob;

    Things I've tried:

    - Explicitly setting IsReadOnly="false" and IsSynchronizedWithCurrentItem="True".
    - Changing the combobox ItemsSources from IQueryables to Lists.
    - Building my own Job object (a plain vanilla entity class).
    - Every binding mode for the comboboxes.

    The Subsonic DAL doesn't implement INotifyPropertyChanged, but I don't see why it would need to for simple binding like this. I just want to be able to pick something from the dropdown and save it. Comparing this with my last problem (link at the top of this message), I seem to have something really weird going on with data sources. Maybe it's a Subsonic thing?

    Read the article

  • How to overcome shortcomings in reporting from an EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting in almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this, but would like to pose the question to the SO community.

    Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a separate "reporting" database (OLAP) where the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design?

    The main downside I see is the increased complexity of transferring the data from the EAV database to the reporting database, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible and seems an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (i.e. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against.

    Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being 1 table per field, which makes querying it kind of painful anyway.

    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think they illustrate the point well):

    EAV tables:

        Person
        --------------------
        - Id  - Name        -
        --------------------
        - 123 - Joe Smith   -
        --------------------

        Person_Value
        --------------------------------------------------------------------
        - PersonId - Source - Field       - Value         - EffectiveDate   -
        --------------------------------------------------------------------
        - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26      -
        - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15      -
        - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01      -
        --------------------------------------------------------------------

    Reporting table:

        Person_Denormalized
        ------------------------------------------------------------------------------------------
        - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate    -
        ------------------------------------------------------------------------------------------
        - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                   -
        ------------------------------------------------------------------------------------------

    Normalized design:

        Person
        --------------------
        - Id  - Name        -
        --------------------
        - 123 - Joe Smith   -
        --------------------

        Person_HomeAddress
        -------------------------------------------------------
        - PersonId - Source - Value         - Effective Date   -
        -------------------------------------------------------
        - 123      - CIA    - 123 Cherry Ln - 2010-03-26       -
        - 123      - DMV    - 561 Stoney Rd - 2010-02-15       -
        - 123      - FBI    - 676 Lancas Dr - 2010-03-01       -
        -------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) in SQL, so my most common operation, besides inserting new values, will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do it with a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.

    Read the article

  • maven sonar problem

    - by senzacionale
    I want to use Sonar for analysis, but I can't get any data to show up at localhost:9000.

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <artifactId>KIS</artifactId>
            <groupId>KIS</groupId>
            <version>1.0</version>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-antrun-plugin</artifactId>
                        <version>1.4</version>
                        <executions>
                            <execution>
                                <id>compile</id>
                                <phase>compile</phase>
                                <configuration>
                                    <tasks>
                                        <property name="compile_classpath" refid="maven.compile.classpath"/>
                                        <property name="runtime_classpath" refid="maven.runtime.classpath"/>
                                        <property name="test_classpath" refid="maven.test.classpath"/>
                                        <property name="plugin_classpath" refid="maven.plugin.classpath"/>
                                        <ant antfile="${basedir}/build.xml">
                                            <target name="maven-compile"/>
                                        </ant>
                                    </tasks>
                                </configuration>
                                <goals>
                                    <goal>run</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </project>

    Output when running Sonar (note that the jar file is empty):

        [INFO] Executed tasks
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] No sources to compile
        [INFO] [surefire:test {execution: default-test}]
        [INFO] No tests to run.
        [INFO] [jar:jar {execution: default-jar}]
        [WARNING] JAR will be empty - no content was marked for inclusion!
        [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar
        [INFO] [install:install {execution: default-install}]
        [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Unnamed - KIS:KIS:jar:1.0
        [INFO]    task-segment: [sonar:sonar] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] [sonar:sonar {execution: default-cli}]
        [INFO] Sonar host: http://localhost:9000
        [INFO] Sonar version: 2.1.2
        [INFO] [sonar-core:internal {execution: default-internal}]
        [INFO] Database dialect class org.sonar.api.database.dialect.Oracle
        [INFO] -------------  Analyzing Unnamed - KIS:KIS:jar:1.0
        [INFO] Selected quality profile : KIS, language=java
        [INFO] Configure maven plugins...
        [INFO] Sensor SquidSensor...
        [INFO] Sensor SquidSensor done: 16 ms
        [INFO] Sensor JavaSourceImporter...
        [INFO] Sensor JavaSourceImporter done: 0 ms
        [INFO] Sensor AsynchronousMeasuresSensor...
        [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms
        [INFO] Sensor SurefireSensor...
        [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports
        [INFO] Sensor SurefireSensor done: 47 ms
        [INFO] Sensor ProfileSensor...
        [INFO] Sensor ProfileSensor done: 16 ms
        [INFO] Sensor ProjectLinksSensor...
        [INFO] Sensor ProjectLinksSensor done: 0 ms
        [INFO] Sensor VersionEventsSensor...
        [INFO] Sensor VersionEventsSensor done: 31 ms
        [INFO] Sensor CpdSensor...
        [INFO] Sensor CpdSensor done: 0 ms
        [INFO] Sensor Maven dependencies...
        [INFO] Sensor Maven dependencies done: 16 ms
        [INFO] Execute decorators...
        [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000
        [INFO] Database optimization...
        [INFO] Database optimization done: 172 ms
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 minutes 16 seconds
        [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010
        [INFO] Final Memory: 24M/43M
        [INFO] ------------------------------------------------------------------------

    Any idea why? The Java project compiles successfully with the Maven Ant plugin.

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience with web development, I have never been interested in obtaining a domain and publishing a website from my own server. Since today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which in turn forwards HTTP requests to my web server.

    I am confused about some parts and so far could not get the website accessed from outside the network. All I am trying to do is for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless; my modem is a Zoom X6 5590.

    I will continue by explaining what I have done so far and what I think should happen after each step. I have successfully accessed my website from any local computer by entering the internal IP address and port of the host machine in a browser. Next, I forwarded port 80 of my host machine by creating a virtual server entry like 10.0.0.x (static internal IP of the host) - tcp - start port: 80 - end port: 80 in the router options. Now I suppose every request that arrives at the public IP on port 80 will be forwarded to my host machine (10.0.0.x) on port 80. So if everything went as desired, the website listening on port 80 will accept the request, process it, and finally respond.

    I expected to access my website from outside the network by entering http://MyPublicIp:80 in a browser, but I couldn't accomplish this, despite using GoDaddy's domain forwarding tool. I do see a small preview of my website when I click the "preview" button, which checks whether the address (http://publicip/Index.aspx) I entered, to which my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in solving this problem, since using the public IP and port pair directly does not work either. So here is the first question: why am I facing this problem?

    After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port on the host? What I want to design is this: if a client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website, which listens on a different port, will respond. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx, with no success. Can I really do that?

    Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that forwards to that FTP site address? Is it possible to use sub-domains for FTP site access?

    I know I am more than confused, but however you reply, you will help me get a clearer view of this subject. Thank you very much.

    Read the article

  • Clickonce installation fails after addition of WCF service project

    - by Ant
    So I have a WinForms solution, deployed via ClickOnce. Everything worked fine until I added a WCF project (see the error in parsing the manifest file at the end of this post). Now I notice that MSBuild compiles the service into a _PublishedWebsites directory. I don't know what the need for this is, but I suspect it is the cause of the problem. This WCF project references some other projects within the solution. I am actually hosting the WCF service within the application, so I don't really need MSBuild to do all this for me. Any ideas?

        =====================================================================================
        PLATFORM VERSION INFO
            Windows                 : 5.1.2600.131072 (Win32NT)
            Common Language Runtime : 2.0.50727.3603
            System.Deployment.dll   : 2.0.50727.3053 (netfxsp.050727-3000)
            mscorwks.dll            : 2.0.50727.3603 (GDR.050727-3600)
            dfdll.dll               : 2.0.50727.3053 (netfxsp.050727-3000)
            dfshim.dll              : 2.0.50727.3053 (netfxsp.050727-3000)

        SOURCES
            Deployment url : file:///C:/applications/abc/dev/abc.Application.application

        IDENTITIES
            Deployment Identity : Flow Management System.app, Version=1.4.0.0, Culture=neutral, PublicKeyToken=8453086392175e0f, processorArchitecture=msil

        APPLICATION SUMMARY
            * Installable application.
            * Trust url parameter is set.

        ERROR SUMMARY
            Below is a summary of the errors, details of these errors are listed later in the log.
            * Activation of C:\applications\abc\dev\abc.Application.application resulted in exception. Following failure messages were detected:
                + Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
                + Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed:
                    -HRESULT: 0x80070c81
                     Start line: 0
                     Start column: 0
                     Host file:
                + Exception from HRESULT: 0x80070C81

        COMPONENT STORE TRANSACTION FAILURE SUMMARY
            No transaction error was detected.

        WARNINGS
            There were no warnings during this operation.

        OPERATION PROGRESS STATUS
            * [12/03/2010 6:33:53 PM] : Activation of C:\applications\abc\dev\abc.Application.application has started.
            * [12/03/2010 6:33:53 PM] : Processing of deployment manifest has successfully completed.
            * [12/03/2010 6:33:53 PM] : Installation of the application has started.

        ERROR DETAILS
            Following errors were detected during this operation.
            * [12/03/2010 6:33:53 PM] System.Deployment.Application.InvalidDeploymentException (ManifestParse)
                - Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
                    at System.Deployment.Application.DownloadManager.DownloadManifest(Uri& sourceUri, String targetPath, IDownloadNotification notification, DownloadOptions options, ManifestType manifestType, ServerInformation& serverInformation)
                    at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath)
                    at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp)
                    at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
                    at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
                    at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
                --- Inner Exception ---
                System.Deployment.Application.InvalidDeploymentException (ManifestParse)
                - Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed:
                    -HRESULT: 0x80070c81
                     Start line: 0
                     Start column: 0
                     Host file:
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)
                    at System.Deployment.Application.Manifest.AssemblyManifest..ctor(FileStream fileStream)
                    at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
                --- Inner Exception ---
                System.Runtime.InteropServices.COMException
                - Exception from HRESULT: 0x80070C81
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Internal.Isolation.IsolationInterop.CreateCMSFromXml(Byte[] buffer, UInt32 bufferSize, IManifestParseErrorCallback Callback, Guid& riid)
                    at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)

        COMPONENT STORE TRANSACTION DETAILS
            No transaction information is available.

    Read the article

  • Ajax - How to refresh a <DIV> after submit

    - by user107712
    Hi,

    How do I refresh part of a page (a "DIV") after my application issues a submit? I'm using jQuery with the ajaxForm plugin. I set my target to "divResult", but the page repeats its content inside "divResult".

    Sources:

        $(document).ready(function() {
            $("#formSearch").submit(function() {
                var options = {
                    target: "#divResult",
                    url: "http://localhost:8081/sniper/estabelecimento/pesquisar.action"
                }
                $(this).ajaxSubmit(options);
                return false;
            });
        })

    Page:

        ...
        <div id="divResult" class="quadro_conteudo" >
            <table id="tableResult" class="tablesorter">
                <thead>
                    <tr>
                        <th style="text-align:center;">
                            <input id="checkTodos" type="checkbox" title="Marca/Desmarcar todos" />
                        </th>
                        <th scope="col">Name</th>
                        <th scope="col">Phone</th>
                    </tr>
                </thead>
                <tbody>
                    <s:iterator value="entityList">
                        <s:url id="urlEditar" action="editar"><s:param name="id" value="%{id}"/></s:url>
                        <tr>
                            <td style="text-align:center;"><s:checkbox id="checkSelecionado" name="selecionados" theme="simple" fieldValue="%{id}"></s:checkbox></td>
                            <td> <s:a href="%{urlEditar}"><s:property value="name"/></s:a></td>
                            <td> <s:a href="%{urlEditar}"><s:property value="phone"/></s:a></td>
                        </tr>
                    </s:iterator>
                </tbody>
            </table>
            <div id="pager" class="pager">
                <form>
                    <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/first.png" class="first"/>
                    <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/prev.png" class="prev"/>
                    <input type="text" class="pagedisplay"/>
                    <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/next.png" class="next"/>
                    <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/last.png" class="last"/>
                    <select class="pagesize">
                        <option selected="selected" value="10">10</option>
                        <option value="20">20</option>
                        <option value="30">30</option>
                        <option value="40">40</option>
                        <option value="<s:property value="totalRegistros"/>">todos</option>
                    </select>
                    <s:label>Total de registros: <s:property value="totalRegistros"/></s:label>
                </form>
            </div>
            <br/>
        </div>

    Thanks!!!

    Read the article

  • "Local transaction already has 1 non-XA Resource: cannot add more resources" error

    - by jthg
    After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources. But:

    1. What if I don't want a distributed transaction? What would I do if I want to start transactions on two different databases at the same time, but commit the transaction on one database and roll back the transaction on the other?
    2. I'm wondering how my code actually initiated a distributed transaction. It looks to me like I'm starting completely separate transactions on each of the databases.

    Info about the application: the application is an EJB running on a Sun Java Application Server 9.1. I use something like the following Spring context to set up the Hibernate session factories:

        <bean id="dbADatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/dbA"/>
        </bean>
        <bean id="dbASessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dbADatasource" />
            <property name="hibernateProperties">
                [hibernate properties...]
            </property>
            <property name="mappingResources">
                [mapping resources...]
            </property>
        </bean>

        <bean id="dbBDatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/dbB"/>
        </bean>
        <bean id="dbBSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dbBDatasource" />
            <property name="hibernateProperties">
                [hibernate properties...]
            </property>
            <property name="mappingResources">
                [mapping resources...]
            </property>
        </bean>

    Both of the JNDI resources are javax.sql.ConnectionPoolDataSource instances. They actually both point to the same connection pool, but we have two different JNDI resources because the two groups of tables may move to different databases in the future.

    Then in code, I do:

        sessionA = dbASessionFactory.openSession();
        sessionB = dbBSessionFactory.openSession();
        sessionA.beginTransaction();
        sessionB.beginTransaction();

    The sessionB.beginTransaction() line produces the error in the title of this post - sometimes. I ran the app on two different Sun application servers. One runs it fine, the other throws the error. I don't see any difference in how the two servers are configured, although they do connect to different, but equivalent, databases.

    So the questions are:

    1. Why doesn't the above code start completely independent transactions?
    2. How can I force it to start independent transactions rather than a distributed transaction?
    3. What configuration could cause the difference in behavior between the two application servers?

    Thanks.

    Read the article
