Search Results

Search found 2601 results on 105 pages for 'commit'.


  • How to "revert" unchanged files with mercurial?

    - by Virgil
    I have installed Windows7 on my computer, and I had to change all permissions/take ownership - which apparently "touched" all my files, and now everything appears to be "modified" (when I do "hg status"). Is there a command I can run so that I will either "commit" or "revert" all the files that have no actual change in them (i.e. text is unchanged, even if file attributes are changed).

    Read the article

  • Publishing an open source project as a public repository and applying a license

    - by Anton
    If I publish my project now, with added license information, will the license still apply to the project if one goes back a few commits in the history to a state where I hadn't yet added any license information? [Relevant answer][1] [1]: http://stackoverflow.com/questions/2468566/correctly-applying-an-open-source-license/2468663#2468663 Relevant answer This suggests that unless there is some license information available, no rights are granted. Is that true in this case too? Or will the license I added in the last commit also apply to older commits?

    Read the article

  • Disable the Home Button and Back Button

    - by michael
    I want a way to disable the Home Button and the Back Button when a checkbox is clicked in my application (my application is on version 4.2.2). I have the code below, but it does not work: when I click the checkbox, the application stops. public void HardButtonOnClick(View v) { boolean checked1 = ((CheckBox) v).isChecked(); if(checked1) { SQLiteDatabase db; db = openOrCreateDatabase("Saftey.db", SQLiteDatabase.CREATE_IF_NECESSARY, null); db.setVersion(1); db.setLocale(Locale.getDefault()); db.setLockingEnabled(true); ContentValues values = new ContentValues(); values.put("hardBtn", "YES"); db.update("Setting", values, "id = ?", new String[] { "1" }); Toast.makeText(this, "Hard Button Locked", Toast.LENGTH_LONG).show(); //SharedPreferences pref = getSharedPreferences("pref",0); //SharedPreferences.Editor edit = pref.edit(); //edit.putString("hard","yes"); //edit.commit(); /* String Lock="yes" ; Bundle bundle = new Bundle(); bundle.putString("key", Lock); Intent a = new Intent(Change_setting.this, ChildMode.class); a.putExtras(bundle); startActivity(a);*/ super.onAttachedToWindow(); this.getWindow().setType(WindowManager.LayoutParams.TYPE_KEYGUARD); isLock = true; } else { SQLiteDatabase db; db = openOrCreateDatabase("Saftey.db", SQLiteDatabase.CREATE_IF_NECESSARY, null); db.setVersion(1); db.setLocale(Locale.getDefault()); db.setLockingEnabled(true); ContentValues values = new ContentValues(); values.put("hardBtn", "NO"); db.update("Setting", values, "id = ?", new String[] { "1" }); //SharedPreferences pref = getSharedPreferences("pref",0); //SharedPreferences.Editor edit = pref.edit(); //edit.putString("hard","no"); //edit.commit(); Toast.makeText(this, "Hard Button Un-Locked", Toast.LENGTH_LONG).show(); isLock = false; } } How can I make this work?

    Read the article

  • DBTransaction Rollback throws

    - by pdiddy
    Looking at the documentation, it says that the Rollback method can throw when the transaction is not in a pending state (i.e. after the transaction is begun and before it is committed). I can't seem to find a way to check whether the transaction can be rolled back or not. There isn't a state property either.

    Read the article
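
    A minimal C# sketch of the usual workaround, offered as an illustration rather than code from the question: System.Data.Common.DbTransaction exposes no "pending" state property, so you either track completion yourself or catch the InvalidOperationException that Rollback throws once the transaction has already been committed or rolled back.

        using System;
        using System.Data.Common;

        static class TransactionHelper
        {
            // Rolls back only if the transaction is still pending; swallows the
            // InvalidOperationException thrown when it has already completed.
            public static void TryRollback(DbTransaction tx)
            {
                if (tx == null) return;
                try
                {
                    tx.Rollback();
                }
                catch (InvalidOperationException)
                {
                    // Already committed or rolled back - nothing left to undo.
                }
            }
        }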

  • Download SVN for Windows

    - by Gandalf StormCrow
    Every now and then I have a problem with SVN inside Eclipse: a folder gets locked and I have to check out projects a few times, update, and stuff like that. Is there an SVN client I can use to commit files directly from the console or from Windows folders?

    Read the article

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an Ubuntu 8.04 server instance with an 8GB virtual disk on vmware 1.0.9. For disk maintenance I made a copy of the virtual disk (by making a copy of the 2 vmdk files of sda on the stopped vm on the host) and added it to the original vm. Now this vm has its original virtual disk sda plus a 1:1 copy (sdd). (There are 2 additional disks, sdb and sdc, which I ignore.) I would expect sdd not to be mounted when I start the vm. So I try to run fsck on sdd from the running vm, but fsck reports that the device is busy: $ sudo fsck.ext3 -b 8193 /dev/sdd e2fsck 1.40.8 (13-Mar-2008) fsck.ext3: Device or resource busy while trying to open /dev/sdd Filesystem mounted or opened exclusively by another program? The "mount" command does not tell me sdd is mounted: $ sudo mount /dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) /sys on /sys type sysfs (rw,noexec,nosuid,nodev) varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755) varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777) udev on /dev type tmpfs (rw,mode=0755) devshm on /dev/shm type tmpfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro) /dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro) securityfs on /sys/kernel/security type securityfs (rw) When I ignore the warning and continue the fsck, it reports many errors. How do I get this under control? Is there a better way to figure out if sdd is mounted? Or how is it "busy"? How do I unmount it then? How do I prevent Ubuntu from automatically mounting it? Or is there something else I am missing? Also, from /var/log/syslog I cannot see that it is mounted; this is the last part of the startup sequence: kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF] kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line) kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3 kernel: [ 14.639366] udev: renamed network interface eth0 to eth1 kernel: [ 14.670187] eth1: link up kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/ kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE] kernel: [ 19.425637] NET: Registered protocol family 10 kernel: [ 19.437550] lo: Disabled Privacy Extensions kernel: [ 24.328857] loop: module loaded kernel: [ 24.449293] lp0: using parport0 (interrupt-driven). kernel: [ 26.075499] EXT3 FS on sda1, internal journal kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds kernel: [ 28.381706] EXT3 FS on sdc1, internal journal kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds kernel: [ 28.445436] EXT3 FS on sdb1, internal journal kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 31.309766] eth1: no IPv6 routers present kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team mysqld_safe[4367]: started mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914 mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections. mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3 /etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary. 
/etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql /etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck /etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u /etc/mysql/debian-start[4436]: Checking for insecure root accounts. /etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.

    Read the article

  • How do I manipulate Handler Mappings cleanly in IIS7 using the Microsoft.Web.Administration namespac

    - by Kev
    I asked this over on Stack Overflow but maybe it's something an experienced IIS 7 administrator might know more about, so I'm asking here as well. When manipulating Handler Mappings using the Microsoft.Web.Administration namespace, is there a way to remove the <remove name="handler name"> tag added at the site level? For example, I have a site which inherits all the handler mappings from the global handler mappings configuration. In applicationHost.config the <location> tag initially looks like this: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> </system.webServer> </location> To remove a handler I use code similar to this: string siteName = "60030 - testsite-60030.com"; string handlerToRemove = "ASPClassic"; using(ServerManager serverManager = new ServerManager()) { Configuration siteConfig = serverManager.GetApplicationHostConfiguration(); ConfigurationSection handlersSection = siteConfig.GetSection("system.webServer/handlers", siteName); ConfigurationElementCollection handlersCollection = handlersSection.GetCollection(); ConfigurationElement handlerElement = handlersCollection .Where(h => h["name"].Equals(handlerToRemove)).Single(); handlersCollection.Remove(handlerElement); } The equivalent APPCMD instruction would be: appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /-[name='ASPClassic'] /commit:apphost This results in the site's <location> tag looking like: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> <handlers> <remove name="ASPClassic" /> </handlers> </system.webServer> </location> So far so good. However if I re-add the ASPClassic handler this results in: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> <handlers> <!-- Why doesn't <remove> get removed instead of tacking on an <add> directive? --> <remove name="ASPClassic" /> <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" /> </handlers> </system.webServer> </location> This happens both when using the Microsoft.Web.Administration namespace from C# and when using the following APPCMD command: appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /+[name='ASPClassic',path='*.asp',verb='GET,HEAD,POST',modules='IsapiModule',scriptProcessor='%windir%\system32\inetsrv\asp.dll',resourceType='File'] /commit:apphost This can result in a lot of cruft over time for each website that's had a handler removed then re-added programmatically. Is there a way to just remove the <remove name="ASPClassic" /> tag using the Microsoft.Web.Administration namespace code or APPCMD?

    Read the article
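
    A hedged sketch of one possible cleanup, not a confirmed answer to the question: if it is acceptable to drop the site's entire <handlers> override (both the <remove> and the <add> elements) and fall back to the inherited global mappings, ConfigurationSection.RevertToParent() should clear the delegated section at that location. The site name below comes from the question; everything else is illustrative and assumes that method behaves as described.

        using Microsoft.Web.Administration;

        class HandlerCleanup
        {
            static void Main()
            {
                const string siteName = "60030 - testsite-60030.com";
                using (ServerManager serverManager = new ServerManager())
                {
                    Configuration appHost = serverManager.GetApplicationHostConfiguration();
                    ConfigurationSection handlersSection =
                        appHost.GetSection("system.webServer/handlers", siteName);

                    // Removes the site-level <handlers> block entirely, so any
                    // accumulated <remove>/<add> pairs disappear with it.
                    handlersSection.RevertToParent();
                    serverManager.CommitChanges();
                }
            }
        }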

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces have been idle with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must go into the server and make it do an outbound IPv6 connection (normally a ping) to start it back up. What's weird though is that if I run iptraf when it's not working, it still shows an inbound IPv6 packet... the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about 1 mile away on the same ISP, it is never able to connect. It always times out, but again iptraf shows an inbound IPv6 packet. Again, it just does not reply. To test if my server is accessible through IPv6 I always have to use my vzw 4g phone (they use IPv6) or ipv6proxy dot net. Here is all of the configuration information my ISP gives on their tunnel server: 6rd Prefix = 2602:100::/32 Border Relay Address = 68.114.165.1 6rd prefix length = 32 IPv4 mask length = 0 Here is my /etc/network/interfaces for IPv6 (used x's to block real addresses): auto charterv6 iface charterv6 inet6 v4tunnel address 2602:100:189f:xxxx::1 netmask 32 ttl 64 gateway ::68.114.165.1 endpoint 68.114.165.1 local 24.159.218.xxx up ip link set mtu 1280 dev charterv6 Here is my iptables config: filter :INPUT DROP [0:0] :fail2ban-ssh - [0:0] :OUTPUT ACCEPT [0:0] :FORWARD DROP [0:0] :hold - [0:0] -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22 -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT -A fail2ban-ssh -j RETURN -A INPUT -p icmp -j ACCEPT COMMIT And last, here is my ip6tables firewall config: filter :INPUT DROP [1653:339023] :FORWARD DROP [0:0] :OUTPUT ACCEPT [60141:13757903] :hold - [0:0] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT -A INPUT -p ipv6-icmp -j ACCEPT COMMIT So, summary: 1. iptraf always shows IPv6 traffic, so it's always making it to the server. 2. The server stops replying on IPv6 after no traffic for a while (10 minutes-ish) until an outbound connection is made, then the process repeats. 3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request). Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows: IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0 ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1 It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN), even with the IPv6 address being set in the hosts file to the server's domain name. With iptables set up normally with the above configurations it only says this: IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0 I'm REALLY stuck on this, and any help would be GREATLY appreciated.

    Read the article

  • Hosting a subversion working copy in an remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted and a development server where I can work on the page. To ease things I intended not to have a local copy of the project on my disk, but to actually work directly on the files that the development server hosts. For that I set up a WebDAV directory, under devserver.com/workspace, that actually mapped to files served under devserver.com/. So I could connect to devserver.com/workspace, change something and view the results live at devserver.com/. So far this worked perfectly. The next step was to create an SVN repository that would take care of my version control. I intended to be able to check in to the repository from my development server and, at any time, with a small shell script, deploy any revision from the SVN to the live server by checking out a copy of the revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part though is where problems arose: My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well. I can create, move, delete, edit, whatever files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. On the first try I used the Eclipse plugin Subversive. The actual checkout worked fine and I can update and commit stuff to the repository; however, I cannot add any files to the ignore list. It always gives me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN - and again it failed with the same errors. Here is what it says when trying to add files to svnignore: Some of selected resources were not added to ignore. svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props' This is what apache2 tells me when I try to add a file to svnignore: [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated). [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0] Actually both messages are repeated several times. The first one occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create a regular file, or delete, rename or modify it, none of those messages appear in my error.log. While writing this question now I was able to add files to svnignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore. The error that used to pop up when adding files to svnignore now also shows up while committing. While searching the web I found some people having this same message appear because they had files differing only in upper-/lower-case naming. I checked my repository and did not find such files. 
I also read somewhere about people having trouble with WebDAV and file locking, because WebDAV's file locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. This error, though, has not appeared anymore since I set up a completely fresh repository and working copy. I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel

    Read the article

  • Scenarios for Bazaar and SVN interaction

    - by Adam Badura
    At our company we are using SVN repository. I'm doing programming from both work (main place) and home (mostly experiments and refactoring). Those are two different machines, in different networks and almost never turned on at the same time (after all I'm either at work or at home...) I wanted to give a chance to some distributed version control system and solve some of the issues associated with SVN based process and having two machines. From git, Mercurial and Bazaar I chose to start with Bazaar since it claims that it is designed do be used by human beings. Its my first time with distributed system and having nice and easy user interface was important for me. Features I wanted to achieve were: Being able to update from SVN repository and commit to it. Being able to commit locally steps of my work on a task. Being able to have few separate tasks at the same time in their own local branches. Being able to share those branches between my work and home computer. As a means of transport between work and home computer I wanted to use a pen-drive. Company server will not work since I may not instal there anything. Neither will work a web service repository as I may not upload source code to web (especially if it would be public which seems to be a common case in free web services). This transport should be Bazaar-based (or what ever else I will end with) so it can be done more or less automatically but manual copying and pasting some folders or generating patch files (providing they would work - I have bad experience with patch files in SVN) would work as well if there is no better solution. Yet the pen-drive should only be used for transportation. I do not want to edit or build there. I tried following Bazaar guidelines for integration with SVN. But I failed. I tried both bzr svn-import and bzr checkout providing URL from my repository as both https://... and svn+https://.... In some cases it had some issues with certificates but the output specified argument to ignore them so I did that. Sometimes it asked me to log in (in other cases maybe it remembered... I don't know) which I did. All were running very slow (this could be our server issue) and at some point were interrupted due to connection interruption (this almost for sure is our server issue: it truncates the connection after some time). But since (as opposed to SVN) restarting starts a new rather than from point where it was interrupted I was unable to reach all the ~19000 revisions (ending usually somewhere around 150). What and how should I do with Bazaar? Is is possible to somehow import SVN repository from the local checkout (so that I do not suffer the connection truncation)? I was told that a colleague that used to work with us has done something similar (importing SVN repository with full history) with Mercurial like in no time. So I'm seriously considering now trying Mercurial even if only to see if that will work. But also what are your general guidelines to achieve the listed features?

    Read the article

  • How can I make subversion reset the stored passwords/users and remember my authentication credential

    - by NicDumZ
    Hello folks! Background: I used to have everything working just fine on my fresh install: $ svn co https://domain:443/ test1 Error validating server certificate for 'https://domain:443': - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually! Certificate information: - Hostname: **REMOVED** - Valid: **REMOVED** - Issuer: **REMOVED** - Fingerprint: **checked with issuer and REMOVED** (R)eject, accept (t)emporarily or accept (p)ermanently? p Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz-machine-hostname': Authentication realm: <https://domain:443> Subversion repository Username: nicdumz Password for 'nicdumz': # proceeds to checkout correctly $ svn co https://domain:443/ test2 # checkouts nicely, without asking for my password. At some point I needed to commit stuff using a different account. So I did that $ svn ci --username other.user Authentication realm: <https://domain:443> Subversion repository Password for 'other.user': # works fine But since then, everytime I want to commit as 'nicdumz' (default user, all repos have been checked-out with that user), it prompts me for my password: $ svn ci Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz': Hey come on, why :) The same happens if I want a fresh checkout, since read-access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth was storing credentials, so I removed it from the way: $ cd ~/.subversion $ mv auth oldauth $ mkdir auth It seemed to work at first, because svn had forgotten about certificate validation: $ svn co https://domain:443/ test3 Error validating server certificate for 'https://domain:443': - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually! Certificate information: - Hostname: **REMOVED** - Valid: **REMOVED** - Issuer: **REMOVED** - Fingerprint: **checked with issuer and REMOVED** (R)eject, accept (t)emporarily or accept (p)ermanently? p Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz-machine-hostname': Authentication realm: <https://domain:443> Subversion repository Username: nicdumz Password for 'nicdumz': # proceeds to checkout correctly $ svn up Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz': What? how is this happening? If you have suggestions to investigate more about the behaviour, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure should I go for investigation. Oh, and for what it's worth: $ svn --version svn, version 1.6.6 (r40053) compiled Oct 26 2009, 06:19:08 Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/ This product includes software developed by CollabNet (http://www.Collab.Net/). The following repository access (RA) modules are available: * ra_neon : Module for accessing a repository via WebDAV protocol using Neon. - handles 'http' scheme - handles 'https' scheme * ra_svn : Module for accessing a repository using the svn network protocol. - with Cyrus SASL authentication - handles 'svn' scheme * ra_local : Module for accessing a repository on local disk. - handles 'file' scheme * ra_serf : Module for accessing a repository via WebDAV protocol using serf. - handles 'http' scheme - handles 'https' scheme

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release of Visual Studio 2010 comes with a large number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010. 1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The Start Page has been redesigned and templated so that anyone can change it as they wish. 2) Multiple Monitors - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor. 3) Zoom in the Code Editor – Rebuilding the editors in WPF has brought significant improvements. The one I like best is the zoom feature: we can now zoom in the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on icon-based windows like Solution Explorer and the Toolbox. 4) Box Selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular region by holding down the Alt key and selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private, for example. 5) New Improved Search – One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This appears in the Navigate To window, which can be brought up by pressing Ctrl + ,. The Navigate To window also understands camel casing and will search by camel casing when characters are entered in upper case. For example, we can search AOH for AddOrderHeader. 6) Call Hierarchy – This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows the implementations of an interface and the overrides of virtual or abstract methods. This window is very helpful for understanding code flow and evaluating the effect of making changes. The best part is that it is available at design time, not only at runtime like a debugger. 7) Highlighting references – One of the very cool features in Visual Studio 2010 is that if you select a variable, all uses of that variable are highlighted alongside. This works for all the symbols returned by Find All References, and also for class names, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them. 8) Generate from usage - The Generate from usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code. This minimizes interruption to your workflow. 9) IntelliSense Suggestion Mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. 
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press CTRL+ALT+SPACEBAR to toggle between completion mode and suggestion mode. 10) Application Lifecycle Management – A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal, etc.) is available for free (this is not available for the Express edition). 11) Start Page – The Start Page has been redesigned with WPF for new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the Start Page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the Start Page is customizable enough to be changed to individual requirements. 12) Extension Manager – Visual Studio 2010 provides good ways to be extended. We can also use MEF to extend most of the features of Visual Studio. The new Extension Manager can now go to the Visual Studio Gallery and install extensions without even opening a browser. 13) Code snippets – Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well. 14) Improved IntelliSense for JavaScript – IntelliSense for JavaScript has been improved vastly (around 2-5 times). IntelliSense now also shows XML documentation comments on the go. 15) Web Deployment – Web deployment has been vastly improved. We can package and publish the web application in one click. Three major supported deployment scenarios are web packages, one-click deployment and web configuration transformation. 16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deploying a site is as easy as hitting F5. 17) Azure – Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment. Vikram

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stackoverflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard coding sample data, so now I am moving onto the GUI and preparing my MVP. Section A I have seen MVP using the view as the entry point, that is in the views constructor method it creates the presenter, which in turn creates the model, wiring up events as needed. I have also seen the presenter as the entry point, where a view, model and presenter are created, this presenter is then given a view and model object in its constructor to wire up the events. As in 2, but the model is not passed to the presenter. Instead the model is a static class where methods are called and responses returned directly. Section B In terms of keeping the view and model in sync I have seen. Whenever a value in the view in changed, i.e. TextChanged event in .Net/C#. This fires a DataChangedEvent which is passed through into the model, to keep it in sync at all times. And where the model changes, i.e. a background event it listens to, then the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes a SaveEvent it fires, passing through into the model to make the save. In this case the model mimics the view's data and processes actions. Similar to #b1, however the view does not sync with the model all the time. Instead when the user wants to commit changes, SaveEvent is fired and the presenter grabs the latest details and passes them into the model. in this case the model does not know about the views data until it is required to act upon it, in which case it is passed all the needed details. Section C Displaying of business objects in the view, i.e. a object (MyClass) not primitive data (int, double) The view has property fields for all its data that it will display as domain/business objects. Such as view.Animals exposes a IEnumerable<IAnimal> property, even though the view processes these into Nodes in a TreeView. Then for the selected animal it would expose SelectedAnimal as IAnimal property. The view has no knowledge of domain objects, it exposes property for primitive/framework (.Net/Java) included objects types only. In this instance the presenter will pass an adapter object the domain object, the adapter will then translate a given business object into the controls visible on the view. In this instance the adapter must have access to the actual controls on the view, not just any view so becomes more tightly coupled. Section D Multiple views used to create a single control. i.e. You have a complex view with a simple model like saving objects of different types. You could have a menu system at the side with each click on an item the appropriate controls are shown. You create one huge view, that contains all of the individual controls which are exposed via the views interface. You have several views. You have one view for the menu and a blank panel. This view creates the other views required but does not display them (visible = false), this view also implements the interface for each view it contains (i.e. child views) so it can expose to one presenter. The blank panel is filled with other views (Controls.Add(myview)) and ((myview.visible = true). 
The events raised in these "child"-views are handled by the parent view which in turn pass the event to the presenter, and visa versa for supplying events back down to child elements. Each view, be it the main parent or smaller child views are each wired into there own presenter and model. You can literately just drop a view control into an existing form and it will have the functionality ready, just needs wiring into a presenter behind the scenes. Section E Should everything have an interface, now based on how the MVP is done in the above examples will affect this answer as they might not be cross-compatible. Everything has an interface, the View, Presenter and Model. Each of these then obviously has a concrete implementation. Even if you only have one concrete view, model and presenter. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects and it just serves to pass messages between them. Only the View has an interface. The Model has static methods and is not created, thus no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static the Model has no link to the presenter. Personal thoughts From all the different variations I have presented (most I have probably used in some form) of which I am sure there are more. I prefer A3 as keeping business logic reusable outside just MVP, B2 for less data duplication and less events being fired. C1 for not adding in another class, sure it puts a small amount of non unit testable logic into a view (how a domain object is visualised) but this could be code reviewed, or simply viewed in the application. If the logic was complex I would agree to an adapter class but not in all cases. For section D, i feel D1 creates a view that is too big atleast for a menu example. I have used D2 and D3 before. Problem with D2 is you end up having to write lots of code to route events to and from the presenter to the correct child view, and its not drag/drop compatible, each new control needs more wiring in to support the single presenter. D3 is my prefered choice but adds in yet more classes as presenters and models to deal with the view, even if the view happens to be very simple or has no need to be reused. i think a mixture of D2 and D3 is best based on circumstances. As to section E, I think everything having an interface could be overkill I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects in tests. Personally I would see E2 as a classic solution, although have seen E3 used in 2 projects I have worked on previously. Question Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work that has variations, and I remember when I first started doing MVC, I understood the concept, but could not originally work out where is the entry point, everything has its own function but what controls and creates the original set of MVC objects.

    Read the article
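
    For reference, a minimal C# sketch of one of the variants discussed above (roughly option A2 with B2-style syncing and E2-style interfaces): the presenter receives a view interface and a model, wires the view's save event, and pulls data from the view only when the user commits. All names here are illustrative, not taken from the question.

        using System;

        public interface ICustomerView
        {
            string CustomerName { get; }
            event EventHandler SaveRequested;
            void ShowStatus(string message);
        }

        public class CustomerModel
        {
            public void Save(string name)
            {
                // Persist via the repository/datastore infrastructure.
            }
        }

        public class CustomerPresenter
        {
            private readonly ICustomerView _view;
            private readonly CustomerModel _model;

            public CustomerPresenter(ICustomerView view, CustomerModel model)
            {
                _view = view;
                _model = model;
                _view.SaveRequested += OnSaveRequested;
            }

            private void OnSaveRequested(object sender, EventArgs e)
            {
                // B2-style: read the view's data only at commit time.
                _model.Save(_view.CustomerName);
                _view.ShowStatus("Saved");
            }
        }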

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c: The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.   The wallet that holds the Oracle Database user IDs and passwords stored in the ‘credential store’ stored in the new 12c dircrd/cwallet.sso file.   A wallet can be created using a ‘create wallet’  command.  Adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.   GGSCI (EDLVC3R27P0) 42> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 43> add masterkey Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.   Existing GUI Wallet utilities that come with other products such as the Oracle Database “Oracle Wallet Manager” do not work on this version of the wallet. The default Oracle Wallet can be changed.   GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/* -rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso GGSCI (EDLVC3R27P0) 45> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   The second wallet file is used for the credential used to connect to a database, without exposing the user id or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.   GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/* -rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso   The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored in a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.   GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt   The new 12c UserIdAlias parameter is used to locate the credential in the wallet so the source user id and password does not need to be stored as a parameter as long as it is in the wallet.   GGSCI (EDLVC3R27P0) 52> view param extwest extract extwest exttrail ./dirdat/ew useridalias gguamer table west.*; The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or pump extract. GGSCI (EDLVC3R27P0) 54> view param pwest extract pwest encrypttrail AES256 rmthost easthost, mgrport 15001 rmttrail ./dirdat/pe passthru table west.*;   Once the extracts are running, records can be encrypted using the wallet.   
GGSCI (EDLVC3R27P0) 60> info extract *west EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       00:00:17 (updated 00:00:01 ago) Process ID           24982 Log Read Checkpoint  Oracle Integrated Redo Logs                      2014-05-30 05:25:53                      SCN 0.0 (0) EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       24:02:32 (updated 00:00:05 ago) Process ID           24983 Log Read Checkpoint  File ./dirdat/ew000004                      2014-05-29 05:23:34.748949  RBA 1483   The ‘info masterkey’ command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.   GGSCI (EDLVC3R27P0) 41> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 42> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   Once the replicat is running, records can be decrypted using the wallet.   GGSCI (EDLVC3R27P0) 44> info reast REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING INTEGRATED Checkpoint Lag       00:00:00 (updated 00:00:02 ago) Process ID           25057 Log Read Checkpoint  File ./dirdat/pe000004                      2014-05-30 05:28:16.000000  RBA 1546   There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.   GGSCI (EDLVC3R27P0) 45> view params reast replicat reast assumetargetdefs discardfile ./dirrpt/reast.dsc, purge useridalias ggueuro map west.*, target east.*;   Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.   AMER_SQL>insert into west.branch values (50, 80071); 1 row created.   AMER_SQL>commit; Commit complete.   The following encrypted record can be found using logdump. Logdump 40 >n 2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546 Name: WEST.BRANCH After  Image:                                             Partition 4   G  s    0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1   554f e65a 5185 0257                               | UO.ZQ..W  Bad compressed block, found length of  7075 (x1ba3), RBA 1546   GGS tokens: TokenID x52 'R' ORAROWID         Info x00  Length   20  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..  TokenID x4c 'L' LOGCSN           Info x00  Length    7  3231 3632 3934 33                                 | 2162943  TokenID x36 '6' TRANID           Info x00  Length   10  3130 2e31 372e 3135 3031                          | 10.17.1501  The replicat automatically decrypted this record from the trail and then inserted the row to the target table using the wallet. This select verifies the row was inserted into the target database and the data is not encrypted. 
EURO_SQL>select * from branch where branch_number=50; BRANCH_NUMBER                  BRANCH_ZIP -------------                                   ----------    50                                              80071   Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features including how to use GoldenGate with the Oracle wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.   Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curriculums including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...Avoid shiny object syndromeOver the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.Just query what you needI've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.Don't spend too much time worrying about your unit testsIf you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. 
If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-action-assert deal, and you'll be able to read that later if you need to.Get back to your HTTP rootsASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matters. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, are pretty simple too. The debugging points are really easy to trap and trace.Premature optimization is prematureI'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately of the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?Solve your own problems firstThis is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to server the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.

    Read the article
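
    As an aside on the arrange-action-assert point above, here is a minimal C# sketch of that test shape using NUnit and a hand-rolled fake instead of a mocking framework; the types are illustrative and not taken from POP Forums.

        using System.Linq;
        using System.Collections.Generic;
        using NUnit.Framework;

        public interface ITopicRepository
        {
            IEnumerable<string> GetTitles(int forumID);
        }

        // Hand-rolled fake standing in for the real data layer.
        public class FakeTopicRepository : ITopicRepository
        {
            public IEnumerable<string> GetTitles(int forumID)
            {
                return new[] { "First post", "Second post" };
            }
        }

        public class TopicService
        {
            private readonly ITopicRepository _repo;
            public TopicService(ITopicRepository repo) { _repo = repo; }
            public int CountTopics(int forumID)
            {
                return _repo.GetTitles(forumID).Count();
            }
        }

        [TestFixture]
        public class TopicServiceTests
        {
            [Test]
            public void CountTopics_ReturnsNumberOfTitlesFromRepository()
            {
                // Arrange
                var service = new TopicService(new FakeTopicRepository());
                // Act
                var count = service.CountTopics(1);
                // Assert
                Assert.AreEqual(2, count);
            }
        }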

  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used Above theory is true in most of the cases. However SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset which it can return fastest.In most of the cases the resultset which can be returned fastest is the resultset which is returned using clustered index. Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT One of the Jr. Developer asked me this question (What will be the Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT?) while I was rushing to an important meeting. I was getting late so I asked him to talk with his Application Tech Lead. When I came back from meeting both of them were looking for me. They said they are confused. I quickly wrote down following example for them. 2008 SQL SERVER – Guidelines and Coding Standards Complete List Download Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough or should take care of the situation where they should not prevent best practices for coding. They are basically the guidelines that one should follow for better understanding. Download Guidelines and Coding Standards complete List Download Get Answer in Float When Dividing of Two Integer Many times we have requirements of some calculations amongst different fields in Tables. One of the software developers here was trying to calculate some fields having integer values and divide it which gave incorrect results in integer where accurate results including decimals was expected. Puzzle – Computed Columns Datatype Explanation SQL Server automatically does a cast to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast. Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures I have written many articles about renaming a tables, columns and procedures SQL SERVER – How to Rename a Column Name or Table Name, here I found something interesting about renaming the stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using SP_Rename command, the Stored Procedure is successfully renamed. But when we try to test the procedure using sp_helptext, the procedure will be having the old name instead of new names. 2009 Insert Values of Stored Procedure in Table – Use Table Valued Function It is clear from the result set that , where I have converted stored procedure logic into the table valued function, is much better in terms of logic as it saves a large number of operations. However, this option should be used carefully. The performance of the stored procedure is “usually” better than that of functions. Interesting Observation – Index on Index View Used in Similar Query Recently, I was working on an optimization project for one of the largest organizations. 
While working on one of the queries, we came across a very interesting observation. We found that there was a query on the base table and when the query was run, it used the index, which did not exist in the base table. On careful examination, we found that the query was using the index that was on another view. This was very interesting as I have personally never experienced a scenario like this. In simple words, “Query on the base table can use the index created on the indexed view of the same base table.” Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries Working with SQL Server has never seemed to be monotonous – no matter how long one has worked with it. Quite often, I come across some excellent comments that I feel like acknowledging them as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which is well received in the community. 2010 I encourage all of you to go through complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post pointing to your blog post. SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1 SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2 SQL SERVER – Index Created on View not Used Often – Limitation of the View 3 SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4 SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5 SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6 SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7 SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8 SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9 SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10 SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11 2011 Startup Parameters Easy to Configure If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach into the screen where we can change the startup parameters. Go to SQL Server Configuration Manager >> SQL Server Services >> Right Click on the Server >> Properties >> Startup Parameters 2012 Validating Unique Columnname Across Whole Database I sometimes come across very strange requirements and often I do not receive a proper explanation of the same. Here is the one of those examples. For example “Our business requirement is when we add new column we want it unique across current database.” Read the solution to this strange request in this blog post. Excel Losing Decimal Values When Value Pasted from SSMS ResultSet It is very common when users are coping the resultset to Excel, the floating point or decimals are missed. The solution is very much simple and it requires a small adjustment in the Excel. By default Excel is very smart and when it detects the value which is getting pasted is numeric it changes the column format to accommodate that. Basic Calculation and PEMDAS Order of Operation Read this interesting blog post for fantastic conversation about the subject. 
Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video http://www.youtube.com/watch?v=x_-3tLqTRv0 Delete From Multiple Table – Update Multiple Table in Single Statement There are two questions which I get every single day multiple times. In my gmail, I have created standard canned reply for them. Let us see the questions here. I want to delete from multiple table in a single statement how will I do it? I want to update multiple table in a single statement how will I do it? Read the answer in the blog post. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • At most, how many customized P3 attributes can be added into Agile?

    - by Jie Chen
    I have one customer/Oracle Partner Consultant asking me such question: how many customized attributes can be allowed to add to Agile's subclass Page Three? I never did research against this because Agile User Guide never says this and theoretically Agile supports unlimited amount of customized attributes, unless the browser itself cannot handle them in allocated memory. However my customers says when to add almost 1000 attributes, the browser (Web Client) will not show any Page Three attributes, including all the out-of-box attributes. Let's see why. Analysis It is horrible to add 1000 attributes manually. Let's do it by a batch SQL like below to add them to Item's subclass Page Three tab. Do not execute below SQL because it will not take effect due to your different node id. CREATE OR REPLACE PROCEDURE createP3Text(v_name IN VARCHAR2) IS v_nid NUMBER; v_pid NUMBER; BEGIN select SEQNODETABLE.nextval into v_nid from dual; Insert Into nodeTable ( id,parentID,description,objType,inherit,helpID,version,name ) values ( v_nid,2473003, v_name ,1,0,0,0, v_name); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,925, null); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,1,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,2,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,2,0,1,3,'50'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,5, null); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,2,0,1,6,'50'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,2,0,0,7,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,8,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,9,'1'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,1,10,v_name); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,11,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,11743,1,14,'2'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,30, null); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,38, null); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,59,'1'); Insert Into propertyTable ( 
ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,60,'1'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,724,0,61, null); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,0,232,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,233,'1'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,12239,1,415,'13307'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,0,605,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,610,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,716,'1'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,795,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,2000008821,1,864,'2'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,923,'0'); Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,0,719,'0'); Insert Into tableInfo ( tabID,tableID,classID,att,ordering ) values ( 2473005,1501,2473002,v_nid,9999); commit; END createP3Text; / BEGIN FOR i in 1..1000 LOOP createP3Text('MyText' || i); END LOOP; END; / DROP PROCEDURE createP3Text; COMMIT; Now restart Agile Server and check the Server's log, we noticed below: ***** Node Created : 85625 ***** Property Created : 184579 +++++++++++++++++++++++++++++++++++++ + Agile PLM Server Starting Up... + +++++++++++++++++++++++++++++++++++++ However the previously log before batch SQL is ***** Node Created : 84625 ***** Property Created : 157579 +++++++++++++++++++++++++++++++++++++ + Agile PLM Server Starting Up... + +++++++++++++++++++++++++++++++++++++ Obviously we successfully imported 1000 (85625-84625) attributes. Now go to JavaClient and confirm if we have them or not. Theoretically we are able to open such item object and see all these 1000 attributes and their values, but we get below error. We have no error tips in server log. But never mind we have the Java Console for JavaClient. If to open the same item in JavaClient we get a clear error and detailed trace in Java Console. ORA-01795: maximum number of expressions in a list is 1000 java.sql.SQLException: ORA-01795: maximum number of expressions in a list is 1000 at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125) ... ... 
at weblogic.jdbc.wrapper.PreparedStatement.executeQuery(PreparedStatement.java:128) at com.agile.pc.cmserver.base.AgileFlexUtil.setFlexValuesForOneRowTable(AgileFlexUtil.java:1104) at com.agile.pc.cmserver.base.BaseFlexTableDAO.loadExtraFlexAttValues(BaseFlexTableDAO.java:111) at com.agile.pc.cmserver.base.BasePageThreeDAO.loadTable(BasePageThreeDAO.java:108) If you are interested in the background of the problem, you can de-compile the class com.agile.pc.cmserver.base.AgileFlexUtil.setFlexValuesForOneRowTable and find the root cause: Agile happens to hit the Oracle Database limitation that no more than 1000 values are allowed in an "IN" clause. Check here http://ora-01795.ora-code.com If you need Oracle Agile's final solution, please contact Oracle Agile Support. Performance The two screenshots of JVM heap usage, taken before and after the batch SQL, show no significant memory gap between the two cases. So there is no performance impact on the Agile Application Server unless you have more than 1000 attributes for EACH of your dozens of subclasses. On the client side, 1000 attributes should not hurt the browser's performance either, because the HTML only uses a dt and dd pair for each attribute's label and value, which is quite lightweight.
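As a side note, the generic workaround for ORA-01795 (not Agile's internal fix, for which you should contact Oracle Agile Support) is to split a long value list into chunks of fewer than 1000 expressions and OR the resulting IN lists together. Below is a minimal, illustrative Java sketch of that pattern; the class and method names are made up for the example and are not part of the Agile code base.

import java.util.ArrayList;
import java.util.List;

public class InClauseChunker {

    // Stay below the ORA-01795 limit of 1000 expressions per IN list.
    private static final int ORACLE_IN_LIMIT = 999;

    // Builds one "col IN (...) OR col IN (...)" predicate from the id list.
    public static String buildInPredicate(String column, List<Long> ids) {
        StringBuilder sb = new StringBuilder();
        for (int start = 0; start < ids.size(); start += ORACLE_IN_LIMIT) {
            int end = Math.min(start + ORACLE_IN_LIMIT, ids.size());
            if (start > 0) {
                sb.append(" OR ");
            }
            sb.append(column).append(" IN (");
            for (int i = start; i < end; i++) {
                if (i > start) {
                    sb.append(',');
                }
                sb.append(ids.get(i));
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<Long>();
        for (long i = 1; i <= 2500; i++) {
            ids.add(i);
        }
        // 2500 ids produce three IN lists of 999, 999 and 502 values.
        String predicate = buildInPredicate("att", ids);
        System.out.println(predicate.substring(0, 60) + "...");
    }
}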

    Read the article

  • D2K to OA Framework Transition

    - by PRajkumar
    What is the difference between D2K form and OA Framework? It is a very innocent but important question for someone that desires to make transition from D2K to OA Framework. I hope you have already read and implemented OA Framework Getting Started. I will re-visit my own experience of implementing HelloWorld program in "OA Framework". When I implemented HelloWorld a year ago, I had no clue as to what I was doing & why I was doing those steps. I merely copied the steps from Oracle Tutorial without understanding them. Hence in this blog, I will try to explain in simple manner the meaning of OA Framework HelloWorld Program and compare the steps to D2K form [where possible]. To keep things simple, only basics will be discussed. Following key Steps were needed for HelloWorld Step 1 Create a new Workspace and a new Project as dictated by Oracle's tutorial. When defining project, you will specify a default package, which in this case was oracle.apps.ak.hello This means the following: - ak is the short name of the Application in Oracle           [means fnd_applications.short_name] hello is the name of your project Step 2 Next, you will create a OA Page within hello project Think OA Page as the fmx file itself in D2K. I am saying so because this page gets attached to the form function. This page will be created within hello project, hence the package name oracle.apps.ak.hello.webui Note the webui, it is a convention to have page in webui, means this page represents the Web User Interface You will assign the default AM [OAApplicationModule]. Think of AM "Connection Manager" and "Transaction State Manager" for your page          I can't co-relate this to anything in D2k, as there is no concept of Connection Pooling and that D2k is not stateless. Reason being that as soon as you kick off a D2K Form, it connects to a single session of Oracle and sticks to that single Oracle database session. So is not the case in OAF, hence AM is needed. Step 3 You create Region within the Page. ·         Region is what will store your fields. Text input fields will be of type messageTextInput. Think of Canvas in D2K. You can have nested regions. Stacked Canvas in D2K comes the closest to this component of OA Framework Step 4 Add a button to one of the nested regions The itemStyle should be submitButton, in case you want the page to be submitted when this button is clicked There is no WHEN-BUTTON-PRESSED trigger in OAF. In Framework, you will add a controller java code to handle events like Form Submit button clicks. JDeveloper generates the default code for you. Primarily two functions [should I call methods] will be created processRequest [for UI Rendering Handling] and processFormRequest          Think of processRequest as WHEN-NEW-FORM-INSTANCE, though processRequest is very restrictive. Note What is the difference between processRequest and processFormRequest? These two methods are available in the Default Controller class that gets created. processFormRequest This method is commonly used to react/respond to the event that has taken place, for example click of a button. Some examples are if(oapagecontext.getParameter("Cancel") != null) (Do your processing for Cancellation/ Rollback) if(oapagecontext.getParameter("Submit") != null) (Do your validations and commit here) if(oapagecontext.getParameter("Update") != null) (Do your validations and commit here) In the above three examples, you could be calling oapagecontext.forwardImmediately to re-direct the page navigation to some other page if needed. 
processRequest In this method, page rendering related code is usually written. Effectively, each GUI component is a bean that gets initialised during processRequest. For those who are familiar with D2K forms, something like PRE-QUERY may be written in this method. Step 5 In the controller, to access the value of the field "HelloName" the command is String userContent = pageContext.getParameter("HelloName"); In D2K, we used :block.field. In OA Framework, at submission of the page, all the field values get passed into the OAPageContext object. Use getParameter to access a field value. To set the value of a field, use OAMessageTextInputBean: OAMessageTextInputBean fieldHelloName = (OAMessageTextInputBean)webBean.findChildRecursive("HelloName"); fieldHelloName.setText(pageContext, "Setting the default value"); Note when setting field values in the controller: Note 1. Do not set the value in processFormRequest Note 2. If the field comes from a View Object, then do not use setText in the controller Note 3. For control fields [that are not based on View Objects], you can use setText to assign values in the processRequest method Let's take some notes to expand beyond the HelloWorld Project Note 1 In D2K forms we sort of created a Window, attached it to a Canvas, and then placed fields within that Canvas. However, in OA Framework, think of the Page being the fmx/Window, the Region being a Canvas, and fields being within Regions. This is not a formal/accurate analogy between D2K and Framework, but it is close to being logical. Note 2 In D2K, your Forms fmb file was compiled to fmx. It was the fmx file that was deployed on the mid-tier. In the case of OAF, your OA Page is nothing but an XML file. We call this MDS [meta data]. Whatever name you give to the "Page" in OAF, an XML file of the same name gets created. This XML file must then be loaded into the database by using the XML Importer command. Note 3 Apart from the MDS XML file, almost everything else is merely deployed to your mid-tier. Usually this is underneath $JAVA_TOP/oracle/apps/../.. All java files will go underneath java top/oracle/apps/../.. etc. Note 4 When building the tutorial, ignore the steps for setting "Attribute Sets". These are not mandatory. Oracle might just have developed their tutorials without including these. Think of these like Visual Attributes of D2K forms Note 5 The Controller is where you will write any Java code in OA Framework. You can create a Controller per Page or have a different Controller for each of the Regions within the same Page. Note 6 In the method processFormRequest of the Controller, you can access the values of the page by using the notation pageContext.getParameter("<fieldname here>"). This method processFormRequest is executed when the OAF Screen/Page is submitted by a click of a button. Note 7 Inside the controller, all database related interactions, for example interaction with View Objects, happen via the Application Module. But why so? Because the Application Module manages the transaction state of the Application. OAApplicationModuleImpl oaapplicationmoduleimpl = (OAApplicationModuleImpl)oapagecontext.getApplicationModule(oawebbean); OADBTransaction oadbtransaction = (OADBTransaction)oaapplicationmoduleimpl.getDBTransaction(); Note 8 In D2K, we have a control block or a block based on a database view. Similarly, in OA Framework, if the field does not have a View Object attached, then it is like a control field. Hence in the HelloWorld example, the field HelloName is a control field [in D2K terminology]. A View Object can be based on a view/table, a synonym or a SQL statement.
Note 9 I wish to access the fields in a multi-record block that is based on a View Object. Can I do this in the Controller? Sure you can. To traverse those records, do the following (a complete controller sketch is shown after this list):
·         Get a reference to the View Object using (OAViewObject)oapagecontext.getApplicationModule(oawebbean).findViewObject("VO Name Here")
·         Loop through the records of the View Object using the count returned from oaviewobject.getFetchedRowCount()
·         For each record, fetch the values of the fields within the loop, e.g. oracle.jbo.Row row = oaviewobject.getRowAtRangeIndex(loop index here); (String)row.getAttribute("Column name of VO here");
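To tie Notes 5 through 9 together, here is a minimal controller sketch. It is only an illustration under a few assumptions: it uses the calls already quoted in this article (getParameter, getApplicationModule, findViewObject, getFetchedRowCount, getRowAtRangeIndex, getAttribute), the imports are assumed to follow the standard OAF package layout, and the button, item and VO names (Go, HelloName, HelloVO, Name) are placeholders that are not part of the original tutorial.

import oracle.apps.fnd.framework.OAApplicationModule;
import oracle.apps.fnd.framework.OAViewObject;
import oracle.apps.fnd.framework.webui.OAControllerImpl;
import oracle.apps.fnd.framework.webui.OAPageContext;
import oracle.apps.fnd.framework.webui.beans.OAWebBean;
import oracle.jbo.Row;

public class HelloWorldMainCO extends OAControllerImpl {

    // Runs while the page is being rendered -- the closest analogue to
    // WHEN-NEW-FORM-INSTANCE in D2K.
    public void processRequest(OAPageContext pageContext, OAWebBean webBean) {
        super.processRequest(pageContext, webBean);
    }

    // Runs when the page is submitted -- the closest analogue to
    // WHEN-BUTTON-PRESSED in D2K.
    public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {
        super.processFormRequest(pageContext, webBean);

        if (pageContext.getParameter("Go") != null) {
            // Control field (no View Object behind it), read straight from the request.
            String userContent = pageContext.getParameter("HelloName");

            // Database-related work always goes through the Application Module (Note 7).
            OAApplicationModule am = pageContext.getApplicationModule(webBean);
            OAViewObject vo = (OAViewObject) am.findViewObject("HelloVO");

            // Traverse the fetched rows of a multi-record region (Note 9).
            int fetched = vo.getFetchedRowCount();
            for (int i = 0; i < fetched; i++) {
                Row row = vo.getRowAtRangeIndex(i);
                String name = (String) row.getAttribute("Name");
                // ... react to userContent / name here ...
            }
        }
    }
}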

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Hibernate + PostgreSQL: relation does not exist - SQL Error: 0, SQLState: 42P01

    - by tommy599
    Hello, I am having some problems trying to work with PostgreSQL and Hibernate, more specifically, the issue mentioned in the title. I've been searching the net for a few hours now but none of the found solutions worked for me. I am using Eclipse Java EE IDE for Web Developers. Build id: 20090920-1017 with HibernateTools, Hibernate 3, PostgreSQL 8.4.3 on Ubuntu 9.10. Here are the relevant files: Message.class package hello; public class Message { private Long id; private String text; public Message() { } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getText() { return text; } public void setText(String text) { this.text = text; } } Message.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="hello"> <class name="Message" table="public.messages"> <id name="id" column="id"> <generator class="assigned"/> </id> <property name="text" column="messagetext"/> </class> </hibernate-mapping> hibernate.cfg.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">org.postgresql.Driver</property> <property name="hibernate.connection.password">bar</property> <property name="hibernate.connection.url">jdbc:postgresql:postgres/tommy</property> <property name="hibernate.connection.username">foo</property> <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property> <property name="show_sql">true</property> <property name="log4j.logger.org.hibernate.type">DEBUG</property> <mapping resource="hello/Message.hbm.xml"/> </session-factory> </hibernate-configuration> Main package hello; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.cfg.Configuration; public class App { public static void main(String[] args) { SessionFactory sessionFactory = new Configuration().configure() .buildSessionFactory(); Message message = new Message(); message.setText("Hello Cruel World"); message.setId(2L); Session session = null; Transaction transaction = null; try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); session.save(message); } catch (Exception e) { System.out.println("Exception attemtping to Add message: " + e.getMessage()); } finally { if (session != null && session.isOpen()) { if (transaction != null) transaction.commit(); session.flush(); session.close(); } } } } Table structure: foo=# \d messages Table "public.messages" Column | Type | Modifiers -------------+---------+----------- id | integer | messagetext | text | Eclipse console output when I run it Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: Hibernate 3.5.1-Final Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: hibernate.properties not found Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: Bytecode provider name : javassist Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: using JDK 1.4 java.sql.Timestamp handling Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration configure INFO: configuring from resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration 
getConfigurationInputStream INFO: Configuration resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration addResource INFO: Reading mappings from resource : hello/Message.hbm.xml Apr 28, 2010 11:13:54 PM org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues INFO: Mapping class: hello.Message -> public.messages Apr 28, 2010 11:13:54 PM org.hibernate.cfg.Configuration doConfigure INFO: Configured SessionFactory: null Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Using Hibernate built-in connection pool (not for production use!) Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Hibernate connection pool size: 20 Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: autocommit mode: false Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: using driver: org.postgresql.Driver at URL: jdbc:postgresql:postgres/tommy Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: connection properties: {user=foo, password=****} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: RDBMS: PostgreSQL, version: 8.4.3 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC driver: PostgreSQL Native Driver, version: PostgreSQL 8.4 JDBC4 (build 701) Apr 28, 2010 11:13:54 PM org.hibernate.dialect.Dialect <init> INFO: Using dialect: org.hibernate.dialect.PostgreSQLDialect Apr 28, 2010 11:13:54 PM org.hibernate.engine.jdbc.JdbcSupportLoader useContextualLobCreation INFO: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionFactoryFactory buildTransactionFactory INFO: Using default transaction strategy (direct JDBC transactions) Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionManagerLookupFactory getTransactionManagerLookup INFO: No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic flush during beforeCompletion(): disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic session close at end of transaction: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch size: 15 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch updates for versioned data: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Scrollable result sets: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC3 getGeneratedKeys(): enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Connection release mode: auto Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default batch fetch size: 1 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Generate SQL with comments: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL updates by primary key: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL inserts for batching: disabled Apr 28, 2010 11:13:54 PM 
org.hibernate.cfg.SettingsFactory createQueryTranslatorFactory INFO: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.hql.ast.ASTQueryTranslatorFactory <init> INFO: Using ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query language substitutions: {} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JPA-QL strict compliance: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Second-level cache: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query cache: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory createRegionFactory INFO: Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Optimize cache for minimal puts: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Structured second-level cache entries: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Echoing all SQL to stdout Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Statistics: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Deleted entity synthetic identifier rollback: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default entity-mode: pojo Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Named query checking : enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Check Nullability in Core (should be disabled when Bean Validation is on): enabled Apr 28, 2010 11:13:54 PM org.hibernate.impl.SessionFactoryImpl <init> INFO: building session factory Apr 28, 2010 11:13:55 PM org.hibernate.impl.SessionFactoryObjectFactory addInstance INFO: Not binding factory to JNDI, no JNDI name configured Hibernate: insert into public.messages (messagetext, id) values (?, ?) Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: ERROR: relation "public.messages" does not exist Position: 13 Apr 28, 2010 11:13:55 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more Exception in thread "main" org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more PostgreSQL log file 2010-04-28 23:13:55 EEST LOG: execute S_1: BEGIN 2010-04-28 23:13:55 EEST ERROR: relation "public.messages" does not exist at character 13 2010-04-28 23:13:55 EEST STATEMENT: insert into public.messages (messagetext, id) values ($1, $2) 2010-04-28 23:13:55 EEST LOG: unexpected EOF on client connection If I copy/paste the query into the postgre command line and put the values in and ; after it, it works. Everything is lowercase, so I don't think that it's that issue. If I switch to MySQL, the same code same project (I only change driver,URL, authentication), it works. In Eclipse Datasource Explorer, I can ping the DB and it succeeds. Weird thing is that I can't see the tables from there either. It expands the public schema but it doesn't expand the tables. Could it be some permission issue? Thanks!
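One way to narrow this down is to check what the exact connection Hibernate uses can actually see. The snippet below is only an illustrative JDBC check (the class name is made up); it reuses the driver, URL and credentials from the hibernate.cfg.xml above and lists the tables visible in the public schema of whichever database that URL really connects to. If public.messages does not show up here, the problem is the connection (database name, schema or permissions) rather than the Hibernate mapping.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckVisibleTables {
    public static void main(String[] args) throws Exception {
        // Same driver, URL and credentials as hibernate.cfg.xml above.
        Class.forName("org.postgresql.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql:postgres/tommy", "foo", "bar");
        Statement st = con.createStatement();
        // Which database did we really land in, and which public tables does it contain?
        ResultSet rs = st.executeQuery(
                "SELECT current_database(), table_schema, table_name "
                + "FROM information_schema.tables WHERE table_schema = 'public'");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " | "
                    + rs.getString(2) + "." + rs.getString(3));
        }
        rs.close();
        st.close();
        con.close();
    }
}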

    Read the article

  • Spring transaction: Transaction not active

    - by Videanu Adrian
    i develop a app using struts2, spring 3.1, Jpa2 and Hibernate. From Spring i use transactions and IoC. so, i have an ajax code block that calls for a struts2 action every second (this is happening for every user that is logged into application (simultaneous users are around 20-30 at a time)). this action name is PopupAction public class PopupAction extends VActionBase implements ServletRequestAware { private static final long serialVersionUID = -293004532677112584L; private iIntermedService intermedService; private HttpServletRequest servletRequest; @Override public String execute() { Integer agentId = (Integer) session.get("USER_AGENT_ID"); Intermed iObj; try { iObj = intermedService.getIntermed(agentId,locationsString); } catch (Exception e) { logger.error("Cannot get Intermed!!! "+e.getMessage()); return ERROR; } return SUCCESS; } } and then i have the service class : @Transactional(readOnly=true) public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService { @Override public Intermed getIntermed (int agentId,String queueIds) throws Exception { Intermed intermedObj = null; //TODO - find a better implementation for this queueIds parameter!!!! try{ String sql = "SELECT i FROM bla bla bla.....)"; Query q = this.em.createQuery(sql); List<Intermed> iList = q.getResultList(); if (iList.size() == 1){ intermedObj = (Intermed) iList.get(0); //get latest object from DB em.refresh(intermedObj); } }catch(Exception e){ e.printStackTrace(); logger.error(e.getCause()+e.getMessage()); throw e; } return intermedObj; } } here is the spring configuration : <bean id="emfI" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="inboundDS" /> <property name="persistenceUnitName" value="I2PU"/> <!-- GlassFish load-time weaving setup --> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.glassfish.GlassFishLoadTimeWeaver"/> </property> </bean> <tx:annotation-driven transaction-manager="txManagerI" /> <tx:advice id="txManagerInboundAdvice" transaction-manager="txManagerI"> <tx:attributes> <tx:method name="*" rollback-for="java.lang.Exception"/> </tx:attributes> </tx:advice> I have names for transactionManager because i have 3 datasources and 3 transaction managers. the problem is that my glassfish logs are full of messages like these: -- removed in order to be able to add more recent logs -- So the cause is : Caused by: java.lang.IllegalStateException: Transaction not active. But i have no idea what can cause this. Any help ? thanks Updates So i have added to @Transactional annotation the transaction manager name that he has to use, but this still does not solved my problem. 
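For reference, the change described in the previous sentence (pointing @Transactional at one specific transaction manager when several are defined in the context) would look roughly like the sketch below. QualifiedTxService is a made-up class used only to show the qualifier syntax; txManagerI is the bean name from the XML configuration above.

import org.springframework.transaction.annotation.Transactional;

// Hypothetical minimal service, only to show the qualifier syntax; the real
// class in the question extends GenericIService and implements iIntermedService.
@Transactional(value = "txManagerI", readOnly = true)
public class QualifiedTxService {

    public void doWork() {
        // Executes inside a transaction managed by the "txManagerI" bean.
    }
}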
I have captured a log from the time the transaction is created until I get that exception:

    2012-02-08T15:08:55.954+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractBeanFactory.java:245) - Returning cached instance of singleton bean 'txManagerVA'
    2012-02-08T15:08:55.962+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:365) - Creating new transaction with name [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; '',-java.lang.Exception
    2012-02-08T15:08:55.967+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:368) - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] for JPA transaction
    2012-02-08T15:08:55.976+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:400) - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@725b979b]
    2012-02-08T15:08:55.977+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] to thread [thread-pool-1-80(80)]
    2012-02-08T15:08:55.978+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] to thread [thread-pool-1-80(80)]
    2012-02-08T15:08:55.979+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:272) - Initializing transaction synchronization
    2012-02-08T15:08:55.980+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:362) - Getting transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]
    2012-02-08T15:08:55.983+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:423) - Starting resource local transaction on application-managed EntityManager [org.hibernate.ejb.EntityManagerImpl@46d002f4]
    2012-02-08T15:08:55.984+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] to thread [thread-pool-1-80(80)]
    2012-02-08T15:08:55.986+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:400) - Joined local transaction
    2012-02-08T15:08:55.991+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:391) - Completing transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]
    2012-02-08T15:08:55.992+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:922) - Triggering beforeCommit synchronization
    2012-02-08T15:08:55.994+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:935) - Triggering beforeCompletion synchronization
    2012-02-08T15:08:56.001+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] from thread [thread-pool-1-80(80)]
    2012-02-08T15:08:56.002+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:752) - Initiating transaction commit
    2012-02-08T15:08:56.003+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:507) - Committing JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9]
    2012-02-08T15:08:56.008+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:948) - Triggering afterCommit synchronization
    2012-02-08T15:08:56.010+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:964) - Triggering afterCompletion synchronization
    2012-02-08T15:08:56.011+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:331) - Clearing transaction synchronization
    2012-02-08T15:08:56.012+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] from thread [thread-pool-1-80(80)]
    2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] from thread [thread-pool-1-80(80)]
    2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:593) - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] after transaction
    2012-02-08T15:08:56.022+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (EntityManagerFactoryUtils.java:343) - Closing JPA EntityManager
    2012-02-08T15:08:56.023+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|ERROR [thread-pool-1-80(80)] (PopupAction.java:39) - Cannot get Intermed!!! Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active
    2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|org.springframework.dao.InvalidDataAccessApiUsageException: Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active
        at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:298)
        at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:106)
        at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.convertException(ExtendedEntityManagerCreator.java:501)
        at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:481)
        at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCommit(TransactionSynchronizationUtils.java:133)
        at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerAfterCommit(TransactionSynchronizationUtils.java:121)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCommit(AbstractPlatformTransactionManager.java:950)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:796)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
        at $Proxy325.getIntermed(Unknown Source)
        at xxx.vs.common.actions.PopupAction.execute(PopupAction.java:37)
        at sun.reflect.GeneratedMethodAccessor1581.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453)
        at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255)
        at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176)
        at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265)
        at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68)
        at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211)
        at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211)
        at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171)
        at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at xxx.vs.common.utils.AuthenticationInterceptor.intercept(AuthenticationInterceptor.java:78)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at com.googlecode.sslplugin.interceptors.SSLInterceptor.intercept(SSLInterceptor.java:128)
        at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
        at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54)
        at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:510)
        at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77)
        at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
        at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
        at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98)
        at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:162)
        at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:679)
    Caused by: java.lang.IllegalStateException: Transaction not active
        at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:69)
        at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:478)
        ... 93 more

So, again: any ideas?
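One more observation, in case it is relevant: the afterCommit callback that blows up comes from ExtendedEntityManagerCreator, and the DEBUG log shows "Starting resource local transaction on application-managed EntityManager", so the EntityManager used inside GenericIService appears to be an application-managed (extended) one that joins the transaction as an extra synchronization. I have not confirmed that this is the cause, but the alternative I am considering is a plain transaction-scoped, container-managed EntityManager. A minimal sketch (this is not my actual GenericIService, just a placeholder showing the injection style):

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Sketch only: a transaction-scoped EntityManager injected by Spring.
    // The shared proxy delegates to the EntityManager that the current
    // txManagerI transaction has opened, instead of an extended,
    // application-managed EntityManager joining the transaction separately.
    public abstract class GenericIService<T, ID> {

        // "I2PU" is the persistence unit name from the emfI bean above
        @PersistenceContext(unitName = "I2PU")
        protected EntityManager em;

        // ... generic finders / CRUD helpers omitted ...
    }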

