Search Results

Search found 8770 results on 351 pages for 'sun zfs storage 7000 appl'.

Page 19 of 351

  • Can We Survive the Sun’s Death?

    - by Akemi Iwaya
    In the distant future, our sun will begin its descent into death after using up all of the hydrogen fuel in its core. When that happens, the inner parts of our solar system will suffer horrible consequences. But what will happen at that point in time and how quickly will things ‘deteriorate’? Is there anything that could be done to help our planet survive? AsapSCIENCE looks at this ‘hot’ topic in their latest video. Can We Survive The Sun’s Death?     

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I have an x4540 Sun storage server running NexentaStor Enterprise. It serves NFS over 10GbE CX4 to several VMware vSphere hosts, with 30 virtual machines running. For the past few weeks I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:
    panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain
    ffffff003fefb570 genunix:turnstile_block+795 ()
    ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
    ffffff003fefb630 zfs:dbuf_find+5d ()
    ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
    ffffff003fefb700 zfs:dbuf_hold+2e ()
    ffffff003fefb780 zfs:dmu_buf_hold+8e ()
    ffffff003fefb820 zfs:zap_lockdir+6d ()
    ffffff003fefb8b0 zfs:zap_update+5b ()
    ffffff003fefb930 zfs:zap_increment+9b ()
    ffffff003fefb9b0 zfs:zap_increment_int+68 ()
    ffffff003fefba10 zfs:do_userquota_update+8a ()
    ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
    ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
    ffffff003fefbba0 zfs:spa_sync+37b ()
    ffffff003fefbc40 zfs:txg_sync_thread+247 ()
    ffffff003fefbc50 unix:thread_start+8 ()
    Any ideas what this means?
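
    The stack dies inside do_userquota_update(), i.e. while ZFS was updating its per-user space accounting during a transaction-group sync. A minimal shell sketch for checking whether user/group quotas are actually in play on the pool's datasets ('vol0' is a placeholder pool name, and 'root' an arbitrary example user):

        # walk every dataset in the pool
        zfs list -H -o name -r vol0 | while read ds; do
            echo "== $ds =="
            # per-user space accounting table, if one is maintained
            zfs userspace "$ds" 2>/dev/null | head -5
            # example per-user quota property lookup
            zfs get -H "userquota@root" "$ds"
        done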

    Read the article

  • Running 32-bit Firefox with sun-jre in 64-bit Ubuntu

    - by rojanu
    I am trying to run the Juniper Network Connect VPN program to get into work, and it only works with the 32-bit Sun JRE. Everything I have found with Google has failed so far. I can't use any scripts (like madscientist's), because as part of the authentication I need to provide some random characters from a grid. So, to isolate this 32-bit install in a corner, I downloaded Firefox and the JRE and unpacked them to /opt. I run Firefox with sudo, as Juniper asks for the root password. Here is the Firefox plugins folder:
    /opt/firefox32/plugins# ls -la
    total 8
    drwxr-xr-x  2 root root 4096 Mar 11 00:57 .
    drwxr-xr-x 11 root root 4096 Mar 10 23:48 ..
    lrwxrwxrwx  1 root root   49 Mar 11 00:57 libnpjp2.so -> /opt/java/32/jdk1.6.0_31/jre/lib/i386/libnpjp2.so
    Firefox lists the Sun JRE, but when I check it with "http://java.com/en/download/installed.jsp" it either can't detect Java or Firefox freezes. Any ideas? Thanks
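
    For reference, a sketch of the same isolated setup from scratch, using the paths from the listing above. The ia32-libs package is an assumption that fits 64-bit Ubuntu releases of that era; a 32-bit JRE and Firefox also need the 32-bit compatibility libraries to be present:

        mkdir -p /opt/firefox32/plugins
        ln -sf /opt/java/32/jdk1.6.0_31/jre/lib/i386/libnpjp2.so /opt/firefox32/plugins/
        # 32-bit runtime libraries for the 32-bit Firefox and JRE
        sudo apt-get install ia32-libs
        # launch the isolated build; root only because Juniper insists on it
        sudo /opt/firefox32/firefox -no-remote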

    Read the article

  • Restoring an Ubuntu Server using ZFS RAID-Z for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on 1 ext4 disk, data on 3 RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regard to backups, I can't rest until I have done a restore in every failure scenario that I am seeking to protect against. Q: How do I configure Ubuntu Server to recognise a pre-existing RAID-Z array? Clearly if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, I am into regular backup/restore land. If the OS dies and I can restore it, that is also an easy scenario. But if the OS dies and I can't restore it, I need to recreate an Ubuntu server. How do I then get it to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array, and does it simply need to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?
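
    For what it's worth, ZFS writes the pool configuration into labels on the member disks themselves, so a freshly rebuilt OS can rediscover a RAID-Z array without any saved config file. A minimal sketch, assuming the ZFS tools are installed on the new install and 'tank' stands in for the pool name:

        # scan attached disks for pools that can be imported
        sudo zpool import
        # import by name; -f forces it if the pool was last used by the dead install
        sudo zpool import -f tank
        sudo zpool status tank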

    Read the article

  • NEW - Oracle Certifications and Documentation Available for Pre-Acquisition Sun/BEA IdM Products

    - by Irina
    If you have been looking for Oracle certification information or documentation for the pre-acquisition Sun/BEA Identity Management products, you can now find them at the Certifications Central Hub. Use this Hub if you're looking for Sun Identity Management documentation and certified configurations for Waveset, Identity Analytics, OpenSSO, and more; scroll down, below the bullets, to the bottom of the table to find them. Of course, you can still find a great wealth of certification information for current products at this hub, as in the past. Be sure to check before you install! In case you haven't used this page before, notice that you can get to the documentation, certifications and downloads for IdM products by clicking on "Identity Management" in the leftmost pane. In the new screen, you will see each IdM product, along with tabs for Downloads, Documentation, Community, and Learn More. Let us know if you don't find what you are looking for. Happy Trails.

    Read the article

  • SUN Java and GPL V2 licence issue for linked applications

    - by user255607
    I have recently noticed that the Sun Java code has been released as GPL v2 code (see the Google Code repo). Does that mean that all applications written in Java and linked to Sun APIs (beans, and so on) are also GPL? If we strictly follow the GPL then this should be the case (libraries should be LGPL, not GPL, if we want to build proprietary software on top of them). Is there another commercial licence which avoids this issue? I cannot believe this is the case; there must be an explanation for this. Regards, Apple92

    Read the article

  • (My) Sun Ray 3i

    - by user13346636
    Last week, some Sun Ray devices were shown at the LASDEC exhibition. Afterward, they were brought back to the Aoyama Center, but not all of them found a place to be stored. So, two days ago, Iwasaki-san, one of the co-workers I've been close to (and who was at the exhibition), put a Sun Ray 3i (all-in-one with 21.5" screen) on my (shared) desk. Yay! I managed to get a Japanese keyboard, and now I can access my card and cardless sessions from Germany, and the performance is just great - as good as when I work from home in Hamburg. My desk now looks almost as messy as my desk in Hamburg, and my back is very grateful.

    Read the article

  • Announcing: Sun Server X4-8

    - by uwes
    On June 3rd, Oracle announced the latest in its line of workload-specific, enterprise-class servers, the Sun Server X4-8. Co-engineered with Oracle software, these servers are based on the latest Intel® Xeon® E7-8895 v2 processor and are the first to include elastic computing features, maximizing performance by adapting to different workload demands in real time. This server is currently available for quoting, ordering and shipping. This product replaces the Sun Server X2-8. Please read the Product Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here and follow the instructions.) For more server information: Why Buy | How to Use | x86 Server Family | oracle.com | OTN

    Read the article

  • Oracle|Sun at Oracle OpenWorld Tokyo 2012

    - by user13137902
    Oracle|Sun @ Oracle OpenWorld Tokyo 2012. [Most of the original Japanese text of this post was lost to character-encoding corruption; only the schedule details below survive.] Oracle OpenWorld Tokyo 2012 runs for three days, April 4-6. The Oracle|Sun-related sessions listed include: K1-01 (4/4, 9:00-11:15), the "ENGINEERED FOR INNOVATION" keynote; S1-01 (4/4, 11:50-12:35), Oracle Engineered Systems Strategy; S1-33 (4/4, 15:20-16:05); S1-42 (4/4, 16:30-17:15), on Engineered Systems; K2-01 (4/5, 9:30-11:15), the "Extreme Innovation" keynote by the CEO; G2-01 (4/5, 11:50-13:20); S2-42 (4/5, 16:30-17:15), on the SPARC SuperCluster T4-4; S2-53 (4/5, 17:40-18:25), on running Oracle E-Business Suite on SPARC SuperCluster; S3-13 (4/6, 13:00-13:45), on the Sun ZFS Storage Appliance; S3-21 (4/6, 14:10-14:55); and S3-33 (4/6, 15:20-16:05). Registration uses code 7324 (also announced on the event's Facebook page).

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/RBD?

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It will be a file-sharing/downloading service (like Rapidshare); I need large storage capacity and good scalability, and I will add new storage nodes as the project grows. I have come up with four candidate solutions: Lustre, GlusterFS, HDFS, and RBD. To start, I would have two servers: one running the GlusterFS client + web server + DB server + a streaming server, and the other acting as a GlusterFS storage node. (Later I would add more node servers and client servers - I don't yet know how many.) So I am leaning towards GlusterFS. But I really wonder: do the storage nodes need to be high-performance servers with large storage, or will average/slow servers with large storage do? Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server specs (for both clients and nodes); I really don't know whether the nodes need a lot of RAM and good CPUs - I am sure the client servers do. The files will be streamed as well, so automatic file replication is important: the system should work like a cloud, copying the most-demanded files to more nodes under high traffic, so that scalability problems are avoided and visitors can still stream/download those files. I am also open to your experiences/thoughts about any good solution - Lustre, HDFS and RBD are the other options, and I would be happy to hear your thoughts on them here. Thanks
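
    As a rough sketch of the replicated layout described above, once a second storage node exists (hostnames and brick paths are placeholders), a replica-2 GlusterFS volume provides the automatic file replication being asked about:

        gluster peer probe node2
        gluster volume create shared replica 2 \
            node1:/export/brick1 node2:/export/brick1
        gluster volume start shared
        # any client mounts the volume and sees a single namespace
        mount -t glusterfs node1:/shared /mnt/shared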

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700: in all, 12x 1TB 7200rpm SATA disks and 12x 300GB 10k SAS disks in separate zpools, with 30GB of RAM in total. Services provided are CIFS, NFS and iSCSI. All was well, and my ZFS memory usage graph showed a fairly healthy ARC size of around 23GB - making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out, and now the graph looks very different. Partial output of arc_summary.pl is:
    System Memory:
        Physical RAM: 30701 MB
        Free Memory : 26719 MB
        LotsFree:     479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
        Current Size:           915 MB (arcsize)
        Target Size (Adaptive): 119 MB (c)
        Min Size (Hard Limit):  64 MB (zfs_arc_min)
        Max Size (Hard Limit):  29677 MB (zfs_arc_max)
    It's targeting 119MB while sitting at 915MB, with 30GB to play with. Why? Did they change something?
    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:
    my $mru_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
    my $target_size  = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
    my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
    my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
    my $arc_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{size};
    The Kstat entries are there; I'm just getting odd values out of them.
    Edit 2: I've just re-measured the ARC size with arc_summary.pl and verified the numbers with kstat:
    System Memory:
        Physical RAM: 30701 MB
        Free Memory : 26697 MB
        LotsFree:     479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
        Current Size:           744 MB (arcsize)
        Target Size (Adaptive): 119 MB (c)
        Min Size (Hard Limit):  64 MB (zfs_arc_min)
        Max Size (Hard Limit):  29677 MB (zfs_arc_max)
    The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it has targeted exactly the same value (124.91M according to Cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and equilibrium appears to sit between 700 and 1000MB. So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg
    Edit 3: OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied the patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday the box crashed - as in, completely stopped responding to everything. When I rebooted it, it came back up fine, but the graph now shows the ARC growing again; the problem seems to have fixed itself. This is proper la-la-land stuff now. I've literally no idea what's going on. :(
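
    For anyone reproducing the measurements, the same counters can be read straight from kstat, and the minimum the question considers raising is a standard /etc/system tunable (the 4GB value below is purely illustrative):

        # module:instance:name:statistic -- exactly what arc_summary.pl reads
        kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_min \
              zfs:0:arcstats:c_max zfs:0:arcstats:size
        # to raise the hard minimum to 4GB, add this to /etc/system and reboot:
        #   set zfs:zfs_arc_min = 0x100000000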

    Read the article

  • JavaFX FXML communication between Application and Controller classes

    - by likethesky
    I am trying to get an external process, created via ProcessBuilder in my FXML application, destroyed when the application closes, but it's not working. This is based on the helpful advice Sergey Grinev gave me here. I have tried running with and without the "myController.setApp(this);" line, and with "super.stop();" at the top of the overridden method and at the bottom (see the commented-out lines in MyApp below), but no combination works. This probably isn't specific to FXML or JavaFX, though I imagine it is a common pattern when developing apps on JavaFX. I suppose I'm asking for a Java best practice for closing dependent processes in a UI-based app like this one (in this case FXML/JavaFX based), where there is a controller class and an application class. Can you explain what I'm doing wrong? Or better: advise what I should be doing instead? Thanks. In my Application I do this:
    public class MyApp extends Application {
        @Override
        public void start(Stage primaryStage) throws Exception {
            FXMLLoader fxmlLoader = new FXMLLoader();
            Scene scene = (Scene) FXMLLoader.load(getClass().getResource("MyApp.fxml"));
            MyAppController myController = (MyAppController) fxmlLoader.getController();
            primaryStage.setScene(scene);
            primaryStage.show();
            // myController.setApp(this);
        }

        @Override
        public void stop() throws Exception {
            // super.stop(); // this is called on fx app close; you may call it in an action handler too
            if (MyAppController.getScriptProcess() != null) {
                MyAppController.getScriptProcess().destroy();
            }
            super.stop();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }
    In my Controller I do this:
    public class MyAppController implements Initializable {
        private Application app;
        private static Process scriptProcess;

        public void setApp(Application a) {
            app = a;
        }

        public static Process getScriptProcess() {
            return scriptProcess;
        }
    }
    The result when I run with the "myController.setApp(this);" line uncommented (that is, left in the start method) is the following, immediately upon launch: the main Scene flashes, then disappears, then this dialog appears: "JavaFX Launcher Error: Exception while running Application". It gives an "Exception in Application start method" in the console as well.
The result when I leave out the "commented-out code" in my MyApp above (that is, remove the "setApp()" from the start method), is that my app does indeed close, but gives this error when it closes: Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1440) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:69) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:28) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Node.fireEvent(Node.java:6863) at javafx.scene.control.Button.fire(Button.java:179) at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:193) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:336) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:329) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:64) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:33) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Scene$MouseHandler.process(Scene.java:3324) at javafx.scene.Scene$MouseHandler.process(Scene.java:3164) at javafx.scene.Scene$MouseHandler.access$1900(Scene.java:3119) at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1559) at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2261) at 
com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:228) at com.sun.glass.ui.View.handleMouseEvent(View.java:528) at com.sun.glass.ui.View.notifyMouse(View.java:922) at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method) at com.sun.glass.ui.gtk.GtkApplication$3$1.run(GtkApplication.java:82) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1435) ... 44 more Caused by: java.lang.NullPointerException at mypackage.MyController.handleCancel(MyController.java:300) ... 49 more Clean up...

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand this is something I should solve by myself, but as you will see, it's a bit difficult. A small description:
    Now: Storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports.
    After: Storage = the previous storage plus another 1PB using DDN S2A9900 or LSI E5400 (still to be decided) (Lustre 2.0), 8 OSS, 10GigE network; 100 compute nodes with 2x InfiniBand.
    Previous experience: transferred 120TB in less than 3 days using the following command:
    tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log
    So, the big problem: doing the big mathematical equations, I conclude that we are going to need a month to transfer the data from one side to the new one. During this time the researchers would have to step back, and I'm personally not happy with that. I mention the InfiniBand connections because I think there may be a chance to use 18 compute nodes (18 x 2 IB = 36 ports) to transfer the data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but if it doesn't just burn up, it will go faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go via 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.
    Note 1: Zoredache, we can divide it into two blocks, (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is Lustre 2.0-formatted, then reformat where (A) was with Lustre 2.0, move (B) onto that block, and extend it with the space where (B) was. This way we end up with (A) and (B) on separate filesystems, with 1PB each.
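
    One hedged sketch for getting more streams in flight than the single tar pipe above: split the top-level directories across concurrent pipes (18 here, one per proposed compute node; /old and /new are placeholder mount points, and the blocking options simply mirror the command that already worked):

        # one tar pipe per top-level directory, 18 running at a time
        ls /old | xargs -P 18 -I{} \
            sh -c 'tar -C /old --record-size 2048 -b 2048 -cf - "{}" \
                   | tar -C /new --record-size 2048 -b 2048 -xf -'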

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.
    Old: courier-imapd on several (18+) 1TB-storage servers. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers don't have replicas, and there are no backups either.
    New: dovecot2 on a single huge server with 16TB of (SATA) storage and a few SSDs. We store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks. There is an identical server which holds an rsync backup, at most 15 minutes old, of the primary. Higher-ups/management wanted to pack as much storage as possible into each server to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that much small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers; on facing disk-crunch issues like in the old architecture, accounts would again be moved manually.
    Concerns/doubts: I'm not convinced the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure whether there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access; if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. With filesystems like Lustre, I'd get the advantage of a single namespace, whereby I would not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) it to work with an object store of some sort, such as OpenStack object storage?
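
    On the SSD-to-SATA tiering specifically, a sketch of how dovecot expresses it natively with mdbox alt storage - the mail_location shown is an assumption, and the nightly job shifts anything saved more than a day ago onto the ALT (SATA) path:

        # assumed in dovecot.conf:
        #   mail_location = mdbox:/ssd/%u/mdbox:ALT=/sata/%u/mdbox
        # nightly, for all users: move day-old mail to the ALT storage
        doveadm altmove -A savedbefore 1d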

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirement other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration: my production database is 8GB, with a correspondingly sized backup file, and the .vldf and .vmdf files represent the only additional storage used by the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Glassfish4 throws an exception when I declare a validation.xml file on the classpath

    - by Rafael Ruiz Tabares
    I've tried to declare a custom validator for the @NotNull constraint, and Glassfish4 throws this exception when it finds /META-INF/validation.xml. The project works fine if I omit this file. Exception while dispatching an event java.lang.IllegalStateException: Singleton not set for WebappClassLoader(delegate=true; repositories=WEB-INF/classes/) at org.glassfish.weld.ACLSingletonProvider$ACLSingleton.get(ACLSingletonProvider.java:110) at org.jboss.weld.Container.instance(Container.java:54) at org.jboss.weld.bootstrap.WeldBootstrap.shutdown(WeldBootstrap.java:644) at org.glassfish.weld.WeldDeployer.doBootstrapShutdown(WeldDeployer.java:309) at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:220) at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:131) at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:328) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:493) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:527) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:523) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:522) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:546) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1423) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1500(CommandRunnerImpl.java:108) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1762) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1674) at org.glassfish.admin.rest.resources.admin.CommandResource.executeCommand(CommandResource.java:396) at org.glassfish.admin.rest.resources.admin.CommandResource.execCommandSimpInMultOut(CommandResource.java:234) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:125) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:91) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:346) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:341) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:224) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at
org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:198) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:946) at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:331) at org.glassfish.admin.rest.adapter.JerseyContainerCommandService$3.service(JerseyContainerCommandService.java:165) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:181) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:246) at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191) at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168) at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189) at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206) at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136) at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114) at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838) at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544) at java.lang.Thread.run(Thread.java:744) ]] [2014-06-09T19:37:52.476+0200] [glassfish 4.0] [SEVERE] [AS-WEB-CORE-00108] [javax.enterprise.web.core] [tid: _ThreadID=32 _ThreadName=admin-listener(1)] [timeMillis: 1402335472476] [levelValue: 1000] [[ ContainerBase.addChild: start: org.apache.catalina.LifecycleException: java.lang.IllegalArgumentException: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener at org.apache.catalina.core.StandardContext.start(StandardContext.java:5864) at com.sun.enterprise.web.WebModule.start(WebModule.java:691) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:1041) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:1024) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:747) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2278) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1924) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:139) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:122) at 
org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:291) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:352) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:497) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:527) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:523) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:522) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:546) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1423) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1500(CommandRunnerImpl.java:108) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1762) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1674) at org.glassfish.admin.rest.resources.admin.CommandResource.executeCommand(CommandResource.java:396) at org.glassfish.admin.rest.resources.admin.CommandResource.execCommandSimpInMultOut(CommandResource.java:234) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:125) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:91) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:346) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:341) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:224) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:198) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:946) at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:331) at org.glassfish.admin.rest.adapter.JerseyContainerCommandService$3.service(JerseyContainerCommandService.java:165) at 
org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:181) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:246) at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191) at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168) at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189) at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206) at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136) at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114) at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838) at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544) at java.lang.Thread.run(Thread.java:744) Caused by: java.lang.IllegalArgumentException: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener at org.apache.catalina.core.StandardContext.addListener(StandardContext.java:3270) at org.apache.catalina.core.StandardContext.addApplicationListener(StandardContext.java:2476) at com.sun.enterprise.web.TomcatDeploymentConfig.configureApplicationListener(TomcatDeploymentConfig.java:251) at com.sun.enterprise.web.TomcatDeploymentConfig.configureWebModule(TomcatDeploymentConfig.java:110) at com.sun.enterprise.web.WebModuleContextConfig.start(WebModuleContextConfig.java:266) at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:486) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:163) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5861) ... 66 more Caused by: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener at org.apache.catalina.core.StandardContext.createListener(StandardContext.java:3391) at org.apache.catalina.core.StandardContext.loadListener(StandardContext.java:5414) at com.sun.enterprise.web.WebModule.loadListener(WebModule.java:1788) at org.apache.catalina.core.StandardContext.addListener(StandardContext.java:3268) ... 
    73 more Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:329) at com.sun.enterprise.web.WebContainer.createListenerInstance(WebContainer.java:1015) at com.sun.enterprise.web.WebModule.createListenerInstance(WebModule.java:2158) at org.apache.catalina.core.StandardContext.createListener(StandardContext.java:3389) ... 76 more Caused by: java.lang.NullPointerException at org.jboss.weld.bootstrap.WeldBootstrap.getManager(WeldBootstrap.java:435) at org.glassfish.weld.services.JCDIServiceImpl.createManagedObject(JCDIServiceImpl.java:320) at org.glassfish.weld.services.JCDIServiceImpl.createManagedObject(JCDIServiceImpl.java:263) at com.sun.enterprise.container.common.impl.managedbean.ManagedBeanManagerImpl.createManagedBean(ManagedBeanManagerImpl.java:485) at com.sun.enterprise.container.common.impl.managedbean.ManagedBeanManagerImpl.createManagedBean(ManagedBeanManagerImpl.java:439) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:313) ... 79 more
    This is the constraint mapping file:
    <constraint-definition annotation="org.hibernate.validator.constraints.NotNull">
        <validated-by include-existing-validators="true">
            <value>es.project.validator.customConstraint.NotEmptyValidator</value>
        </validated-by>
    </constraint-definition>
    And validation.xml:
    <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration"
        xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration validation-configuration-1.0.xsd"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <constraint-mapping>META-INF/validation/mapping.xml</constraint-mapping>
    Project's structure:
    WEB-INF
    +----\classes
    +-------\META-INF
    ------- validation.xml
    ----------\validation
    +----------\mapping.xml
    Validator code:
    import javax.validation.ConstraintValidator;
    import javax.validation.ConstraintValidatorContext;
    import javax.validation.constraints.NotNull;
    import org.hibernate.validator.constraintvalidation.HibernateConstraintValidatorContext;

    public class NotEmptyValidator implements ConstraintValidator<NotNull,Object> {
        @Override
        public void initialize(NotNull constraintAnnotation) {
        }

        @Override
        public boolean isValid(Object value, ConstraintValidatorContext context) {
            if (value.toString().isEmpty()) {
                ...........
                ...........
                ...........
            }
            return true;
        }
    }

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and on to port 7000 on x.x.x.218. Here is some ASCII art:
    |browser|--3128--|Squid on x.x.x.224|--6999--|ssh tunnel|--<satellite link>--7000--|Squid on x.x.x.218|--80--|www|
    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using:
    ssh -C -f -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N
    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from reading port 80, and to feed it from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.
    Update: Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:
    14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
    14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0
    Update 2: When I run the second command, I get the following error in my browser:
    ERROR: The requested URL could not be retrieved. The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php - Zero Sized Reply. Squid did not receive any data for this request. Your cache administrator is webmaster. Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)
    (remote-site is 172.16.1.224.) When I do a tcpdump -i any port 7000 -nn I get the following:
    root@remote-site:~# tcpdump -i any port 7000 -nn
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
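
    For the "how does Squid feed the tunnel" part, one common pattern is to make the far Squid a parent of the near one and forbid going direct. A sketch, assuming the tunnel is instead brought up from x.x.x.224 so that its local end listens on 127.0.0.1:7000:

        # on x.x.x.224: local port 7000 -> Squid on x.x.x.218:7000
        ssh -C -N -f -o CompressionLevel=9 -L 7000:127.0.0.1:7000 [email protected]
        # squid.conf on x.x.x.224: route every request via the tunnel end
        #   cache_peer 127.0.0.1 parent 7000 0 no-query default
        #   never_direct allow all
        squid -k reconfigure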

    Read the article
