Search Results

Search found 9066 results on 363 pages for 'product'.


  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics: we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume except for system upgrades (once every few months), where we take the system offline but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports/etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the modifications the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch - I don't have the Win2k Server Std install media any more, as it's been lost. I purchased "EaseUS Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and PERC RAID card are good, because I installed Ubuntu on the logical drive (4 x 72GB disks, RAID 5) with no problems. I've booted from the EaseUS Todo Backup CD (which is WinPE based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive). The problem is when I boot the server: I can get the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity. Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add New Hardware' wizard, forcing with the 'Have Disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time), and the boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.

  • LMDE detects a wireless card, but can't use it

    - by Davidos
    LMDE can see my wireless card, and correctly identifies it, but it refuses to let me turn it on through the shiny graphical interface. Is there a way to use the non-shiny terminal to turn on my wireless? One small tidbit I noticed was the stated version; it says the version is 00. I believe that's hexadecimal for 0, which may indicate something screwy with the software. It would be nice if someone could tell me how to figure out how to solve this kind of problem in the future.

        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:09:00.0
             logical name: wlan0
             version: 00
             serial: 91:e3:7b:0d:a3:a9
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-1-amd64 firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:43 memory:e1d00000-e1d01fff

    I've tested multiple other network managers, and none of them work. The wireless switch is on; I'm sure it's a driver problem.

  • 2.6.9 Kernel on virtual server (non upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware, etc. The only problem is that I won't be able to upgrade to a newer kernel than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of a real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous I/O), which could speed up some server I/O. Should I expect problems with that old kernel version, in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing/some things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging ca. 400MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris. PS Exact kernel version: 2.6.9-023stab051.3-smp

  • Removing Phantom Registry Entries

    - by shedd
    First, some background: I'm trying to install an HP OfficeJet 6500. I've got the printer set up on my network fine, but the driver software installation on Windows XP (SP3) is a PITA. The installation program keeps dying with the following error:

        Product: Network -- Error 1324. The folder path 'WD Sync Data' contains an invalid character.

    WD Sync Data is a program on external Western Digital hard drives, which I used to use on this computer, but no external drive is currently mounted. I've searched my registry for those keywords, but haven't found anything. I also ran CCleaner on the registry just to make sure, but no loose ends were detected there, either. There are a number of Google results for the error, but no solutions. One post pointed me to the Windows Installer Cleanup Utility, but this utility doesn't even run - it dies with the same error before even starting. Any thoughts on where I could look to clean up this invalid character so I can get the HP installation wizard to run successfully? Many thanks in advance!
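    (For anyone wanting to brute-force such a search: the sketch below is illustrative only, not a tested tool. It assumes Python 2 is available on the XP machine and uses the standard-library _winreg module to walk a hive and print every key whose value names or data mention the offending string; regedit's built-in search can miss data inside multi-string and binary values, which a dumb substring scan still catches. Installer data often lives under HKEY_CLASSES_ROOT\Installer as well, so the same walk can be pointed there.)

        # registry_grep.py - crude recursive registry search (Python 2, Windows)
        import _winreg

        TARGET = "WD Sync Data"

        def search(root, path):
            try:
                key = _winreg.OpenKey(root, path)
            except WindowsError:
                return  # skip keys we cannot open
            try:
                num_subkeys, num_values, _ = _winreg.QueryInfoKey(key)
                for i in range(num_values):
                    name, data, _ = _winreg.EnumValue(key, i)
                    if TARGET in str(name) or TARGET in str(data):
                        print path, "->", name
                for i in range(num_subkeys):
                    child = _winreg.EnumKey(key, i)
                    search(root, path + "\\" + child if path else child)
            finally:
                _winreg.CloseKey(key)

        search(_winreg.HKEY_LOCAL_MACHINE, "")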

  • Auto-restart mysql when it dies

    - by Los Frijoles
    I have a Rackspace server that I have been renting to run my personal projects on. Since I am cheap, it has 256 MB of RAM and honestly can't handle a lot. Every once in a while, when there is a sharp uptick in traffic, the server decides to start killing processes, and it seems that mysqld is a popular one for it to kill. I try to visit my site and am greeted with the message that there was an error establishing the database connection. Inspection of the logs reveals that mysqld was killed due to lack of memory. Since I am still as poor as I was yesterday and don't want to upgrade my Rackspace VM's RAM, is there a way I can tell it to automagically restart mysqld when it dies? I have a thought to use something like crontab, but alas, I don't know exactly what to do there either. I guess I am a product of the "Linux on your desktop" generation, since I can do most things on my desktop and laptop (which run Linux almost exclusively) but still lack a lot of server administration skills for Linux. The server runs CentOS 6.3.
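    (A minimal watchdog sketch, assuming a sysvinit-style setup like CentOS 6 where the init script is called mysqld and the daemon's process name is mysqld - both worth verifying on the actual box. Save it somewhere like /root/mysql_watchdog.py and have cron run it every minute with a crontab line such as: * * * * * python /root/mysql_watchdog.py)

        # mysql_watchdog.py - restart mysqld if it is not running
        import subprocess

        def mysqld_running():
            # pgrep exits with status 0 when a process named exactly "mysqld" exists
            return subprocess.call(["pgrep", "-x", "mysqld"]) == 0

        if not mysqld_running():
            # assumes the sysvinit service name "mysqld" (the CentOS 6 default)
            subprocess.call(["service", "mysqld", "start"])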

  • can't save or create files in external hard disk

    - by Rodniko
    I formatted my computer and installed a fresh Windows 7. I connected my external hard disk (USB connector) and I have some kind of permission problem. I can't save files after opening them, and right-clicking and choosing "new" shows that I can only create folders. What is wrong? Why doesn't the external HD have permission, and how do I change it? At Microsoft they probably thought: "hmmm.... how would I make it difficult for the user to use our product...", "we will have to make the difficulty start as soon as Windows is installed...." "but how would we guarantee 100% that the user has problems?" "oh yeah! block creating files and saving them, yes!" I'm so tired of those guys... My HD is half a terabyte; changing ownership of all the files inside will take a couple of hours. I need other ideas... if any...

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

  • OpenVPN / iptables restrict some access

    - by RitonLaJoie
    I want to create an OpenVPN service on a dedicated server I have, for some friends, so that they are able to play online games faster. Is there an easy way to restrict which traffic I allow them with iptables? It seems iptables is not very easy to maintain, and we can easily get locked out of our own server. Rebooting into rescue mode every time I get locked out because of bad iptables rules would just be a pain. As far as I understand, the tun interface would be providing the access. What kind of iptables rule would I have to implement to restrict their access to only 1 IP? Also, I don't want this VPN to be the default gateway for all traffic. I guess I should go with the option of pushing a route to the clients, so that they connect to the IP of the game server through the VPN and use their regular routes through their ISP for all other traffic? As a side note, it seems OpenVPN AS is not very robust. Is there some other (commercial is OK) product that would give me the same administration options through a web interface? Is Webmin the only other solution? Thanks!

  • Ethernet/8P8C crimp contacts bent

    - by Fire Lancer
    (If anyone knows the correct terminology, please correct me.) I've got a fairly large number of existing Ethernet cables, many of which have got damaged connector clips over the years, so I got a crimp tool and some new connectors for them. However, in all 4 attempts I have made, 2 or more of the little copper contacts that bite into the wires have instead just bent to one side, and so gone between the gaps in the crimp tool... Unless this really is me doing something wrong (what?), I am inclined to blame the hardware, but is it the crimper or the new connectors I got? I tried to take a picture; you can just about see that, looking from the left, the 3rd, 6th, 7th and 8th pins didn't get pushed in, and so don't form a connection. Unfortunately my camera was barely able to focus on it, and then this website converted it to a JPEG... Update: Connectors/Cable/Tools: The wires are stranded (it looks like about 6 strands, with no evidence of being aluminium rather than copper), and the pins(?) have 2 little flat spikes lengthways along the cables (I understand these dig into the wire, while solid-core connectors would instead have 2 plates designed to go around the core?). The crimper was http://www.amazon.co.uk/gp/product/B0013EXTKK/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1 (it seemed to be highly rated; I already had tools for cutting/stripping). Update 2: Picture of crimp "prongs"(?). Update 3: Side picture of connector. Update 4: Comparison with old connector. The top (used) connector is one from a few years back (different tool and connectors). What makes me wonder whether it really is the tool I need to replace is just how thin the pins are on the new one - could a tool legitimately bend some into a gap rather than pushing them in fully? In fact I can move individual pins to the sides significantly with my fingernail; is that normal?

  • Private subnet for VM server host-only network

    - by Derek Pressnall
    At my current job, we distribute a product based on a Linux server with multiple VMs defined (using KVM / libvirt). We are planning to expose limited ports to the customer's network, and use iptables to direct inbound traffic to the appropriate internal VM. My question: is there a class of private subnets that I can use for the internal host-only network that is least likely to conflict with a client IP subnet? Specifically, if I choose a /24 out of any of the RFC-1918 defined private subnets (such as 192.168.x.x), there is a chance of conflicting with a customer-used range. I noticed that several current VM implementations default to 192.168.122.x -- is this due to an RFC that I'm not familiar with, and therefore this is a safe range to use (that most network admins would avoid)? Or did the various VM vendors just pick that range randomly? I guess I'm looking for an IP range that is more private than the existing private (RFC1918) addresses. The only other thought I had was to use one of the "Test Net" IP ranges reserved for documentation purposes (RFC 5737). Note, that I'm not worried about a customer's network blocking these IPs, as this is only internal to our server (packets get NATted before leaving the box). However this does seem more unorthodox than just sticking with the default 192.168.122.x/24 subnet.
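    (As an aside, whether a candidate host-only network collides with the ranges a particular customer reports can be checked mechanically. A small illustrative sketch, using Python 3's standard-library ipaddress module; the customer ranges shown are made up:)

        import ipaddress

        # candidate host-only network for the internal VMs
        candidate = ipaddress.ip_network("192.168.122.0/24")

        # hypothetical ranges a customer's network admin might report
        customer_ranges = ["10.0.0.0/8", "172.16.10.0/24", "192.168.122.0/25"]

        for cidr in customer_ranges:
            net = ipaddress.ip_network(cidr)
            if candidate.overlaps(net):
                print("conflict:", candidate, "overlaps", net)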

  • Benchmark Linq2SQL, Subsonic2, Subsonic3 - Any other ideas to make them faster ?

    - by Aristos
    I have been working with Subsonic 2 for more than 3 years now... After Linq appeared, and then Subsonic 3, I started thinking about moving to the new Linq features that are connected to SQL. I must say that I started to move and port my Subsonic 2 code to SubSonic 3, and very soon I discovered that the speed was so slow that I didn't believe it - and so I started all these tests. Then I tested Linq2Sql and also saw a delay, compared with Subsonic 2. My question here, especially for Linq2Sql and the upcoming .NET version 4, is: what else can I do to speed it up? What else in the Linq2Sql settings or classes - not in this code that I have used for my measurements? I place here the project with which I made the tests, and also the screenshots of the results. How I make the tests - and the accuracy of my measurements: I use only Google Chrome for this question, because it's difficult for me to show here the many other measurements that I have done with more complex programs. This is the simplest one; I just measure the data read. How can I prove that? I make a simple Thread.Sleep(10 seconds) and check whether I see those 10 seconds in the Google Chrome measurement - and yes, I see it. Here are more tests with this sleep thread to see what Chrome actually reports: 10 seconds delay, 100 ms delay, zero delay. There is only a small 15ms that shows up in the measurement; it is so small compared with the rest of my tests that I do not care about it. So what do I measure? Just the data read via each method - I did not count the data or database delay, or any disk read or anything like that. Later, on the image with the results, I show that no disk activity exists during the measurements. See this image to see what I really measure and whether this is correct. Why I chose this kind of test: it's simple, it's real, and it's near my real problem, where I found the delay of Subsonic 3 in a real program with real data. Now let's test the DALs. Start by seeing this image. I have 4-5 calls on every method, one after the other. The results, for a loop of 100 times, asking for 5 rows, one of which does not exist, are approximately:

        Simple ADO.NET:                        81ms
        SubSonic 2:                            210ms
        Linq2Sql:                              1.70sec
        Linq2Sql with CompiledQuery.Compile:   239ms
        SubSonic 3:                            15.00sec (wow - extremely slow)

    The project: http://www.planethost.gr/DalSpeedTests.rar Can anyone confirm this benchmark, or make any optimizations to help me out? Other tests: someone posted here the link http://ormbattle.net/ (and then removed it - I do not know why). On that page you can find really useful advanced tests for all of these, except Subsonic 2 and Subsonic 3, which I have here! Optimizing: what I really ask here is whether anyone knows any trick to optimize the DALs - not by changing the test code, but by changing the code and the settings of each DAL. For example... Optimizing Linq2Sql: I started searching for ways to optimize Linq2Sql and found this article (and maybe more exist). Finally I got the tricks from that page to run, and optimized the code using them all. The speed went from 1.70sec to near 1.50sec... a big improvement, but still slow. Then I found a different way - a same-idea article - and wow! The speed blew up. Using this trick with CompiledQuery.Compile, the time dropped from 1.5sec to 239ms. Here is the code for the precompiled query...
        Func<DataClassesDataContext, int, IQueryable<Product>> compiledQuery =
            CompiledQuery.Compile((DataClassesDataContext meta, int IdToFind) =>
                (from myData in meta.Products
                 where myData.ProductID.Equals(IdToFind)
                 select myData));

        StringBuilder Test = new StringBuilder();
        int[] MiaSeira = { 5, 6, 10, 100, 7 };

        using (DataClassesDataContext context = new DataClassesDataContext())
        {
            context.ObjectTrackingEnabled = false;
            for (int i = 0; i < 100; i++)
            {
                foreach (int EnaID in MiaSeira)
                {
                    var oFindThat2P = compiledQuery(context, EnaID);
                    foreach (Product One in oFindThat2P)
                    {
                        Test.Append("<br />");
                        Test.Append(One.ProductName);
                    }
                }
            }
        }

    Optimizing SubSonic 3, and problems: I did a lot of performance profiling and changed one thing after another; the speed got better but is still too slow. I posted the findings on the Subsonic group but they ignored the problem; they say that everything is fast... Here are some captures of my profiling, with the delay points inside the Subsonic source code. I have concluded that Subsonic 3 makes more calls about the structure of the database than about the data itself. It needs to reconsider the whole way of asking for data, and follow the Subsonic 2 idea if this is possible. I tried to precompile Subsonic 3 the way I did with Linq2Sql, but have failed for the moment... Optimizing SubSonic 2: after I discovered that Subsonic 3 is extremely slow, I started my checks on Subsonic 2 - something I had never done before, believing that it was fast (and it is). So I came up with some points that could be faster. For example, there are many loops like this one that are actually slow because of string manipulation and comparisons inside the loop. I must tell you that this code is called millions of times(!) over a period of a few minutes of data requests from the program. With a small number of tables and small fields maybe this is not a big thing for some people, but with a large number of tables the delay is even greater. So I decided to optimize Subsonic 2 myself, by replacing the string comparisons with number comparisons! Simple. I did that at almost every point the profiler said was slow. I also changed all the small points that could be even a little faster, and disabled some rarely used things. The results: 5% faster on the Northwind database, nearly 20% faster on my database with 250 tables. That comes to 500ms less in a 10-second process on Northwind, and 100ms faster on my database in a 500ms process. I do not have captures to show you for that, because I made them with different code at a different time, and tracked them down on paper. Anyway, this is my story and my question about all of this: what else do you know to make them even faster... For these measurements I used Subsonic 2.2 optimized by me, Subsonic 3.0.0.3 optimized a little by me, and .NET 3.5.

  • Using jQuery validation plugin with tabbed navigation

    - by user3438917
    I have a tabbed navigation wizard, for which the first section needs to be validated before proceeding to the next tab. The validation should trigger when the user hits the "next" button. I am unable to get the validation to trigger though: <form id="target-group" novalidate="novalidate"> <div class="box"> <div class='box-header-main'><h2><img src="assets/img/list.png" /> Target Group Information</h2></div> <br /> <div class='box'> <div class='box-header-property'><h2><span data-bind="text:Name">New Target Group</span> | <i class='fa fa-file'></i></h2></div> <br /> <div class='row'> <div id='flight-wizard'> <div id='content' class='col-lg-12'> <div class='col-lg-12'> <div id='tabs'> <ul> <li id="targetgroup-info-tab"><a href='#tabs-1'><i class="fa fa-info-circle"></i>Target Group Info</a></li> <li id="zone-tab"><a href='#tabs-2'><i class="fa fa-map-marker"></i>Zones</a></li> </ul> <div id='tabs-1'> <div class='row'> <div class='col-xs-6'> <div class='form-group'> Name<sup>*</sup> <input id="selectError0" name="name" class='form-control col-xs-12' data-bind="value: asdf" placeholder='Enter Name ...' /> </div> <form class='form-horizontal'> <div class='form-group'> Product(s)<sup>*</sup> <div class='controls' id='products'> <select id='selectError3' class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue : 'test', value: test, optionsCaption: 'Choose Product...'"></select> </div> </div> </form> </div> <!--RIGHT PANE--> <div class='col-xs-6'> <div class='form-group'> Platform<sup>*</sup> <div class='controls'> <select id="selectError2" class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue: 'test', value : test, optionsCaption: 'Choose Platform...'"></select> </div> </div> <form class='form-horizontal'> <div class='form-group'> AdTypes(s)<sup>*</sup> <div class='controls' id='adtypes'> <select multiple="" id='adtypesselect' class='form-control' data-rel="chosen" data-bind="options:test, optionsText: 'Name', optionsValue : 'test', selectedOptions: test, optionsCaption: 'test...'"></select> </div> </div> </form> <button id="btn_cancel_large" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-next-large" class='btn btn-large btn-primary btn-round'>Next <i class='fa fa-arrow-circle-right'></i></button> </div> <!--end of right pane--> </div> </div> <div id='tabs-2'> <div class='row'> <div class='col-lg-12'> <div class='row'> <div class='col-lg-12'> <div id='zones_list' class='box-content'> <div id='add-new-targetgroupzone' class='add-new'><i class='fa fa-plus-circle'></i><a href='/#/inventory/targeting/' onclick="return false;">Add Zone</a></div> <table id="results" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </div> <br /> <div class="btn_row"> <button id="btn_cancel_large2" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-submit-large" class='btn btn-large btn-primary btn-round'>Submit <i class='fa fa-arrow-circle-down'></i></button> </div> </div> </div> </div> </div> </div> </div> </div> </div> </form> <form id="zones-form" style="display: none;" novalidate="novalidate" class="slideup-form"> <div class="box"> <div class="box-header-panel"> <h2>Add Target Group Zone</h2> <div class="box-icon" id="zones-form-close"> <i class="fa fa-arrow-circle-down"></i> </div> </div> <div class="box-content 
clearfix"> <div class="box-content"> <table id="zones-list" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </form>

    jQuery:

        $("#target-group").validate({
            rules: {
                name: { required: true }
            },
            messages: {
                name: "Name required"
            }
        });

        $('#btn-next-large').click(function () {
            if ($('#target-group').valid())
                $tabs.tabs('select', $(this).attr("rel"));
        });

  • JPA : Add and remove operations on lazily initialized collection behaviour ?

    - by Albert Kam
    Hello, I'm currently trying out JPA 2, using Hibernate 3.6.x as the engine. I have a ReceivingGood entity that contains a List of ReceivingGoodDetail, with a bidirectional relation. The related code for each entity follows.

    ReceivingGood.java:

        @OneToMany(mappedBy="receivingGood", targetEntity=ReceivingGoodDetail.class,
                   fetch=FetchType.LAZY, cascade = CascadeType.ALL)
        private List<ReceivingGoodDetail> details = new ArrayList<ReceivingGoodDetail>();

        public void addReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(this);
        }

        void internalAddReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.add(receivingGoodDetail);
        }

        public void removeReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(null);
        }

        void internalRemoveReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.remove(receivingGoodDetail);
        }

    ReceivingGoodDetail.java:

        @ManyToOne
        @JoinColumn(name = "receivinggood_id")
        private ReceivingGood receivingGood;

        public void setReceivingGood(ReceivingGood receivingGood) {
            if (this.receivingGood != null) {
                this.receivingGood.internalRemoveReceivingGoodDetail(this);
            }
            this.receivingGood = receivingGood;
            if (receivingGood != null) {
                receivingGood.internalAddReceivingGoodDetail(this);
            }
        }

    In my experiments with both of these entities, both adding a detail to the receivingGood's collection and removing a detail from that collection trigger a query that fills the whole collection before doing the add or remove. This assumption is based on the experiments I paste below. My concern is: is it OK to change only a few records in the collection when the engine has to query all of the details belonging to it? What if the collection had to be filled with 1000 records when I just want to edit a single record? Here are my experiments, with the output as the comment above each method: /* Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ...
too long removing existing detail from lazy collection Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, details0_.creationDate as creation2_10_7_, details0_.modificationDate as modifica3_10_7_, details0_.usercreate_id as usercreate10_10_7_, details0_.usermodify_id as usermodify11_10_7_, details0_.version as version10_7_, details0_.buyQuantity as buyQuant5_10_7_, details0_.buyUnit as buyUnit10_7_, details0_.internalQuantity as internal7_10_7_, details0_.internalUnit as internal8_10_7_, details0_.product_id as product12_10_7_, details0_.receivinggood_id as receivi13_10_7_, details0_.supplierLotNumber as supplier9_10_7_, user1_.id as id2_0_, user1_.creationDate as creation2_2_0_, user1_.modificationDate as modifica3_2_0_, user1_.usercreate_id as usercreate6_2_0_, user1_.usermodify_id as usermodify7_2_0_, user1_.version as version2_0_, user1_.name as name2_0_, user2_.id as id2_1_, user2_.creationDate as creation2_2_1_, user2_.modificationDate as modifica3_2_1_, user2_.usercreate_id as usercreate6_2_1_, user2_.usermodify_id as usermodify7_2_1_, user2_.version as version2_1_, user2_.name as name2_1_, user3_.id as id2_2_, user3_.creationDate as creation2_2_2_, user3_.modificationDate as modifica3_2_2_, user3_.usercreate_id as usercreate6_2_2_, user3_.usermodify_id as usermodify7_2_2_, user3_.version as version2_2_, user3_.name as name2_2_, user4_.id as id2_3_, user4_.creationDate as creation2_2_3_, user4_.modificationDate as modifica3_2_3_, user4_.usercreate_id as usercreate6_2_3_, user4_.usermodify_id as usermodify7_2_3_, user4_.version as version2_3_, user4_.name as name2_3_, product5_.id as id0_4_, product5_.creationDate as creation2_0_4_, product5_.modificationDate as modifica3_0_4_, product5_.usercreate_id as usercreate7_0_4_, product5_.usermodify_id as usermodify8_0_4_, product5_.version as version0_4_, product5_.code as code0_4_, product5_.name as name0_4_, user6_.id as id2_5_, user6_.creationDate as creation2_2_5_, user6_.modificationDate as modifica3_2_5_, user6_.usercreate_id as usercreate6_2_5_, user6_.usermodify_id as usermodify7_2_5_, user6_.version as version2_5_, user6_.name as name2_5_, user7_.id as id2_6_, user7_.creationDate as creation2_2_6_, user7_.modificationDate as modifica3_2_6_, user7_.usercreate_id as usercreate6_2_6_, user7_.usermodify_id as usermodify7_2_6_, user7_.version as version2_6_, user7_.name as name2_6_ from ReceivingGoodDetail details0_ left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id left outer join Product product5_ on details0_.product_id=product5_.id left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id where details0_.receivinggood_id=? after removing try selecting the size : 4 after removing, now flushing Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=? 
Hibernate: update ReceivingGoodDetail set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, buyQuantity=?, buyUnit=?, internalQuantity=?, internalUnit=?, product_id=?, receivinggood_id=?, supplierLotNumber=? where id=? and version=? detail size : 4 */ public void removeFromLazyCollection() { String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f"; ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId); // get existing detail ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131"); detail.setInternalUnit("MCB"); System.out.println("removing existing detail from lazy collection"); receivingGood.removeReceivingGoodDetail(detail); System.out.println("after removing try selecting the size : " + receivingGood.getDetails().size()); System.out.println("after removing, now flushing"); em.flush(); System.out.println("detail size : " + receivingGood.getDetails().size()); } /* Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ... too long adding existing detail into lazy collection Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, details0_.creationDate as creation2_10_7_, details0_.modificationDate as modifica3_10_7_, details0_.usercreate_id as usercreate10_10_7_, details0_.usermodify_id as usermodify11_10_7_, details0_.version as version10_7_, details0_.buyQuantity as buyQuant5_10_7_, details0_.buyUnit as buyUnit10_7_, details0_.internalQuantity as internal7_10_7_, details0_.internalUnit as internal8_10_7_, details0_.product_id as product12_10_7_, details0_.receivinggood_id as receivi13_10_7_, details0_.supplierLotNumber as supplier9_10_7_, user1_.id as id2_0_, user1_.creationDate as creation2_2_0_, user1_.modificationDate as modifica3_2_0_, user1_.usercreate_id as usercreate6_2_0_, user1_.usermodify_id as usermodify7_2_0_, user1_.version as version2_0_, user1_.name as name2_0_, user2_.id as id2_1_, user2_.creationDate as creation2_2_1_, user2_.modificationDate as modifica3_2_1_, user2_.usercreate_id as usercreate6_2_1_, user2_.usermodify_id as usermodify7_2_1_, user2_.version as version2_1_, user2_.name as name2_1_, user3_.id as id2_2_, user3_.creationDate as creation2_2_2_, user3_.modificationDate as modifica3_2_2_, user3_.usercreate_id as usercreate6_2_2_, user3_.usermodify_id as usermodify7_2_2_, user3_.version as version2_2_, user3_.name as name2_2_, user4_.id as id2_3_, user4_.creationDate as creation2_2_3_, user4_.modificationDate as modifica3_2_3_, user4_.usercreate_id as usercreate6_2_3_, user4_.usermodify_id as usermodify7_2_3_, user4_.version as version2_3_, user4_.name as name2_3_, product5_.id as id0_4_, product5_.creationDate as creation2_0_4_, product5_.modificationDate as modifica3_0_4_, product5_.usercreate_id as usercreate7_0_4_, product5_.usermodify_id as usermodify8_0_4_, product5_.version as version0_4_, product5_.code as code0_4_, product5_.name as name0_4_, user6_.id as id2_5_, user6_.creationDate as creation2_2_5_, user6_.modificationDate as modifica3_2_5_, user6_.usercreate_id as usercreate6_2_5_, user6_.usermodify_id as usermodify7_2_5_, user6_.version as version2_5_, user6_.name as name2_5_, user7_.id as id2_6_, user7_.creationDate as creation2_2_6_, user7_.modificationDate as modifica3_2_6_, user7_.usercreate_id as usercreate6_2_6_, user7_.usermodify_id as usermodify7_2_6_, 
user7_.version as version2_6_, user7_.name as name2_6_ from ReceivingGoodDetail details0_ left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id left outer join Product product5_ on details0_.product_id=product5_.id left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id where details0_.receivinggood_id=? after adding try selecting the size : 5 after adding, now flushing Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=? detail size : 5 */ public void editLazyCollection() { String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f"; ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId); // get existing detail ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131"); detail.setInternalUnit("MCB"); System.out.println("adding existing detail into lazy collection"); receivingGood.addReceivingGoodDetail(detail); System.out.println("after adding try selecting the size : " + receivingGood.getDetails().size()); System.out.println("after adding, now flushing"); em.flush(); System.out.println("detail size : " + receivingGood.getDetails().size()); } Please share your experience on this matter ! Thank you !

  • Using Handlebars.js issue

    - by Roland
    I'm having a small issue when I'm compiling a template with Handlebars.js . I have a JSON text file which contains an big array with objects : Source ; and I'm using XMLHTTPRequest to get it and then parse it so I can use it when compiling the template. So far the template has the following structure : <div class="product-listing-wrapper"> <div class="product-listing"> <div class="left-side-content"> <div class="thumb-wrapper"> <img src="{{ThumbnailUrl}}"> </div> <div class="google-maps-wrapper"> <div class="google-coordonates-wrapper"> <div class="google-coordonates"> <p>{{LatLon.Lat}}</p> <p>{{LatLon.Lon}}</p> </div> </div> <div class="google-maps-button"> <a class="google-maps" href="#" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}">Google Maps</a> </div> </div> </div> <div class="right-side-content"></div> </div> And the following block of code would be the way I'm handling the JS part : $(document).ready(function() { /* Default Javascript Options ~a javascript object which contains all the variables that will be passed to the cluster class */ var default_cluster_options = { animations : ['flash', 'bounce', 'shake', 'tada', 'swing', 'wobble', 'wiggle', 'pulse', 'flip', 'flipInX', 'flipOutX', 'flipInY', 'flipOutY', 'fadeIn', 'fadeInUp', 'fadeInDown', 'fadeInLeft', 'fadeInRight', 'fadeInUpBig', 'fadeInDownBig', 'fadeInLeftBig', 'fadeInRightBig', 'fadeOut', 'fadeOutUp', 'fadeOutDown', 'fadeOutLeft', 'fadeOutRight', 'fadeOutUpBig', 'fadeOutDownBig', 'fadeOutLeftBig', 'fadeOutRightBig', 'bounceIn', 'bounceInUp', 'bounceInDown', 'bounceInLeft', 'bounceInRight', 'bounceOut', 'bounceOutUp', 'bounceOutDown', 'bounceOutLeft', 'bounceOutRight', 'rotateIn', 'rotateInDownLeft', 'rotateInDownRight', 'rotateInUpLeft', 'rotateInUpRight', 'rotateOut', 'rotateOutDownLeft', 'rotateOutDownRight', 'rotateOutUpLeft', 'rotateOutUpRight', 'lightSpeedIn', 'lightSpeedOut', 'hinge', 'rollIn', 'rollOut'], json_data_url : 'data.json', template_data_url : 'template.php', base_maps_api_url : 'https://maps.googleapis.com/maps/api/js?sensor=false', cluser_wrapper_id : '#content-wrapper', maps_wrapper_class : '.google-maps', }; /* Cluster ~main class, handles all javascript operations */ var Cluster = function(environment, cluster_options) { var self = this; this.options = $.extend({}, default_cluster_options, cluster_options); this.environment = environment; this.animations = this.options.animations; this.json_data_url = this.options.json_data_url; this.template_data_url = this.options.template_data_url; this.base_maps_api_url = this.options.base_maps_api_url; this.cluser_wrapper_id = this.options.cluser_wrapper_id; this.maps_wrapper_class = this.options.maps_wrapper_class; this.test_environment_mode(this.environment); this.initiate_environment(); this.test_xmlhttprequest_availability(); this.initiate_gmaps_lib_load(self.base_maps_api_url); this.initiate_data_processing(); }; /* Test Environment Mode ~adds a modernizr test which looks wheater the cluster class is initiated in development or not */ Cluster.prototype.test_environment_mode = function(environment) { var self = this; return Modernizr.addTest('test_environment', function() { return (typeof environment !== 'undefined' && environment !== null && environment === "Development") ? 
true : false; }); }; /* Test XMLHTTPRequest Availability ~adds a modernizr test which looks wheater the xmlhttprequest class is available or not in the browser, exception makes IE */ Cluster.prototype.test_xmlhttprequest_availability = function() { return Modernizr.addTest('test_xmlhttprequest', function() { return (typeof window.XMLHttpRequest === 'undefined' || window.XMLHttpRequest === null) ? true : false; }); }; /* Initiate Environment ~depending on what the modernizr test returns it puts LESS in the development mode or not */ Cluster.prototype.initiate_environment = function() { return (Modernizr.test_environment) ? (less.env = "development", less.watch()) : true; }; Cluster.prototype.initiate_gmaps_lib_load = function(lib_url) { return Modernizr.load(lib_url); }; /* Initiate XHR Request ~prototype function that creates an xmlhttprequest for processing json data from an separate json text file */ Cluster.prototype.initiate_xhr_request = function(url, mime_type) { var request, data; var self = this; (Modernizr.test_xmlhttprequest) ? request = new ActiveXObject('Microsoft.XMLHTTP') : request = new XMLHttpRequest(); request.onreadystatechange = function() { if(request.readyState == 4 && request.status == 200) { data = request.responseText; } }; request.open("GET", url, false); request.overrideMimeType(mime_type); request.send(); return data; }; Cluster.prototype.initiate_google_maps_action = function() { var self = this; return $(this.maps_wrapper_class).each(function(index, element) { return $(element).on('click', function(ev) { var html = $('<div id="map-canvas" class="map-canvas"></div>'); var latitude = $(element).attr('data-latitude'); var longitude = $(element).attr('data-longitude'); log("LAT : " + latitude); log("LON : " + longitude); $.lightbox(html, { "width": 900, "height": 250, "onOpen" : function() { } }); ev.preventDefault(); }); }); }; Cluster.prototype.initiate_data_processing = function() { var self = this; var json_data = JSON.parse(self.initiate_xhr_request(self.json_data_url, 'application/json; charset=ISO-8859-1')); var source_data = self.initiate_xhr_request(self.template_data_url, 'text/html'); var template = Handlebars.compile(source_data); for(var i = 0; i < json_data.length; i++ ) { var result = template(json_data[i]); $(result).appendTo(self.cluser_wrapper_id); } self.initiate_google_maps_action(); }; /* Cluster ~initiate the cluster class */ var cluster = new Cluster("Development"); }); My problem would be that I don't think I'm iterating the JSON object right or I'm using the template the wrong way because if you check this link : http://rolandgroza.com/labs/valtech/ ; you will see that there are some numbers there ( which represents latitude and longitude ) but they are all the same and if you take only a brief look at the JSON object each number is different. So what am I doing wrong that it makes the same number repeat ? Or what should I do to fix it ? I must notice that I've just started working with templates so I have little knowledge it.

  • Detect if 2 HTML fragments have identical hierarchical structure

    - by sergzach
    An example of fragments that have identical hierarchical structure:

        (1) <div> <span>It's a message</span> </div>
        (2) <div> <span class='bold'>This is a new text</span> </div>

    An example of fragments that have different structure:

        (1) <div> <span><b>It's a message</b></span> </div>
        (2) <div> <span>This is a new text</span> </div>

    So, fragments with a similar structure correspond to one hierarchical tree (the same tag names, the same hierarchical structure). How can I detect whether 2 elements (HTML fragments) have the same structure simply with lxml? I have a function that does not work properly for a more difficult case (than the example):

        def _is_equal( el1, el2 ):
            # input: 2 elements with possibly equal structure and tag names
            # e.g. root = lxml.html.fromstring( buf )
            #      el1 = root[ 0 ]
            #      el2 = root[ 1 ]
            # move from top to bottom, compare elements
            result = False
            if el1.tag == el2.tag:
                if len( el1 ) == len( el2 ):
                    # has no children
                    if len( el1 ) == 0:
                        return True
                    else:
                        # iterate one of them, for example el1
                        i = 0
                        for child1 in el1:
                            child2 = el2[ i ]
                            is_equal2 = _is_equal( child1, child2 )
                            if not is_equal2:
                                return False
                        return True
                    else:
                        return False
            else:
                return False

    The code fails to detect that the 2 divs with class='tovar2' have an identical structure:

        <body>
        <div class="tovar2">
          <h2 class="new">
            <a href="http://modnyedeti-krsk.ru/magazin/product/333193003"> ?????? ?/? </a>
          </h2>
          <ul class="art">
            <li> ???????: <span>1759</span> </li>
          </ul>
          <div>
            <div class="wrap" style="width:180px;">
              <div class="new"> <img src="shop_files/new-t.png" alt=""> </div>
              <a class="highslide" href="http://modnyedeti-krsk.ru/d/459730/d/820.jpg" onclick="return hs.expand(this)">
                <img src="shop_files/fr_5.gif" style="background:url(/d/459730/d/548470803_5.jpg) 50% 50% no-repeat scroll;" alt="?????? ?/?" height="160" width="180">
              </a>
            </div>
          </div>
          <form action="" onsubmit="return addProductForm(17094601,333193003,3150.00,this,false);">
            <ul class="bott ">
              <li class="price">????:<br> <span> <b> 3 150 </b> ???. </span> </li>
              <li class="amount">???-??:<br><input class="number" onclick="this.select()" value="1" name="product_amount" type="text"> </li>
              <li class="buy"><input value="" type="submit"> </li>
            </ul>
          </form>
        </div>
        <div class="tovar2">
          <h2 class="new">
            <a href="http://modnyedeti-krsk.ru/magazin/product/333124803">?????? ?/?</a>
          </h2>
          <ul class="art">
            <li> ???????: <span>1759</span> </li>
          </ul>
          <div>
            <div class="wrap" style="width:180px;">
              <div class="new"> <img src="shop_files/new-t.png" alt=""> </div>
              <a class="highslide" href="http://modnyedeti-krsk.ru/d/459730/d/820.jpg" onclick="return hs.expand(this)">
                <img src="shop_files/fr_5.gif" style="background:url(/d/459730/d/548470803_5.jpg) 50% 50% no-repeat scroll;" alt="?????? ?/?" height="160" width="180">
              </a>
            </div>
          </div>
          <form action="" onsubmit="return addProductForm(17094601,333124803,3150.00,this,false);">
            <ul class="bott ">
              <li class="price">????:<br> <span> <b>3 150</b> ???. </span> </li>
              <li class="amount">???-??:<br><input class="number" onclick="this.select()" value="1" name="product_amount" type="text"> </li>
              <li class="buy"> <input value="" type="submit"> </li>
            </ul>
          </form>
        </div>
        </body>
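    (For comparison, a shorter way to express the same tag-hierarchy check - not from the original post - is to zip the children pairwise, which also removes the need for the manual index i that the function above never increments. A rough sketch:)

        import lxml.html

        def same_structure(el1, el2):
            # identical tag, identical child count, and structurally equal
            # children compared pairwise, in document order
            if el1.tag != el2.tag or len(el1) != len(el2):
                return False
            return all(same_structure(c1, c2) for c1, c2 in zip(el1, el2))

        root = lxml.html.fromstring(
            "<div><div class='tovar2'><b>x</b></div>"
            "<div class='tovar2'><b>y</b></div></div>")
        print(same_structure(root[0], root[1]))  # True: same tag hierarchy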

  • All is working except if($_POST['submit']=='Update')

    - by user1319909
    I have a working registration and login system. I am trying to create a form where a user can add product registration info (via mysql update). I can't seem to get the db to actually update the fields. What am I missing here?!? <?php define('INCLUDE_CHECK',true); require 'connect.php'; require 'functions.php'; // Those two files can be included only if INCLUDE_CHECK is defined session_name('tzLogin'); // Starting the session session_set_cookie_params(2*7*24*60*60); // Making the cookie live for 2 weeks session_start(); if($_SESSION['id'] && !isset($_COOKIE['tzRemember']) && !$_SESSION['rememberMe']) { // If you are logged in, but you don't have the tzRemember cookie (browser restart) // and you have not checked the rememberMe checkbox: $_SESSION = array(); session_destroy(); // Destroy the session } if(isset($_GET['logoff'])) { $_SESSION = array(); session_destroy(); header("Location: index_login3.php"); exit; } if($_POST['submit']=='Login') { // Checking whether the Login form has been submitted $err = array(); // Will hold our errors if(!$_POST['username'] || !$_POST['password']) $err[] = 'All the fields must be filled in!'; if(!count($err)) { $_POST['username'] = mysql_real_escape_string($_POST['username']); $_POST['password'] = mysql_real_escape_string($_POST['password']); $_POST['rememberMe'] = (int)$_POST['rememberMe']; // Escaping all input data $row = mysql_fetch_assoc(mysql_query("SELECT * FROM electrix_users WHERE usr='{$_POST['username']}' AND pass='".md5($_POST['password'])."'")); if($row['usr']) { // If everything is OK login $_SESSION['usr']=$row['usr']; $_SESSION['id'] = $row['id']; $_SESSION['email'] = $row['email']; $_SESSION['first'] = $row['first']; $_SESSION['last'] = $row['last']; $_SESSION['address1'] = $row['address1']; $_SESSION['address2'] = $row['address2']; $_SESSION['city'] = $row['city']; $_SESSION['state'] = $row['state']; $_SESSION['zip'] = $row['zip']; $_SESSION['country'] = $row['country']; $_SESSION['product1'] = $row['product1']; $_SESSION['serial1'] = $row['serial1']; $_SESSION['product2'] = $row['product2']; $_SESSION['serial2'] = $row['serial2']; $_SESSION['product3'] = $row['product3']; $_SESSION['serial3'] = $row['serial3']; $_SESSION['rememberMe'] = $_POST['rememberMe']; // Store some data in the session setcookie('tzRemember',$_POST['rememberMe']); } else $err[]='Wrong username and/or password!'; } if($err) $_SESSION['msg']['login-err'] = implode('<br />',$err); // Save the error messages in the session header("Location: index_login3.php"); exit; } else if($_POST['submit']=='Register') { // If the Register form has been submitted $err = array(); if(strlen($_POST['username'])<4 || strlen($_POST['username'])>32) { $err[]='Your username must be between 3 and 32 characters!'; } if(preg_match('/[^a-z0-9\-\_\.]+/i',$_POST['username'])) { $err[]='Your username contains invalid characters!'; } if(!checkEmail($_POST['email'])) { $err[]='Your email is not valid!'; } if(!count($err)) { // If there are no errors $pass = substr(md5($_SERVER['REMOTE_ADDR'].microtime().rand(1,100000)),0,6); // Generate a random password $_POST['email'] = mysql_real_escape_string($_POST['email']); $_POST['username'] = mysql_real_escape_string($_POST['username']); $_POST['first'] = mysql_real_escape_string($_POST['first']); $_POST['last'] = mysql_real_escape_string($_POST['last']); $_POST['address1'] = mysql_real_escape_string($_POST['address1']); $_POST['address2'] = mysql_real_escape_string($_POST['address2']); $_POST['city'] = mysql_real_escape_string($_POST['city']); 
$_POST['state'] = mysql_real_escape_string($_POST['state']); $_POST['zip'] = mysql_real_escape_string($_POST['zip']); $_POST['country'] = mysql_real_escape_string($_POST['country']); // Escape the input data mysql_query(" INSERT INTO electrix_users(usr,pass,email,first,last,address1,address2,city,state,zip,country,regIP,dt) VALUES( '".$_POST['username']."', '".md5($pass)."', '".$_POST['email']."', '".$_POST['first']."', '".$_POST['last']."', '".$_POST['address1']."', '".$_POST['address2']."', '".$_POST['city']."', '".$_POST['state']."', '".$_POST['zip']."', '".$_POST['country']."', '".$_SERVER['REMOTE_ADDR']."', NOW() )"); if(mysql_affected_rows($link)==1) { send_mail( '[email protected]', $_POST['email'], 'Your New Electrix User Password', 'Thank you for registering at www.electrixpro.com. Your password is: '.$pass); $_SESSION['msg']['reg-success']='We sent you an email with your new password!'; } else $err[]='This username is already taken!'; } if(count($err)) { $_SESSION['msg']['reg-err'] = implode('<br />',$err); } header("Location: index_login3.php"); exit; } if($_POST['submit']=='Update') { { mysql_query(" UPDATE electrix_users(product1,serial1,product2,serial2,product3,serial3) WHERE usr='{$_POST['username']}' VALUES( '".$_POST['product1']."', '".$_POST['serial1']."', '".$_POST['product2']."', '".$_POST['serial2']."', '".$_POST['product3']."', '".$_POST['serial3']."', )"); if(mysql_affected_rows($link)==1) { $_SESSION['msg']['upd-success']='Thank you for registering your Electrix product'; } else $err[]='So Sad!'; } if(count($err)) { $_SESSION['msg']['upd-err'] = implode('<br />',$err); } header("Location: index_login3.php"); exit; } if($_SESSION['msg']) { // The script below shows the sliding panel on page load $script = ' <script type="text/javascript"> $(function(){ $("div#panel").show(); $("#toggle a").toggle(); }); </script>'; } ?>

  • New features of C# 4.0

    This article covers New features of C# 4.0. Article has been divided into below sections. Introduction. Dynamic Lookup. Named and Optional Arguments. Features for COM interop. Variance. Relationship with Visual Basic. Resources. Other interested readings… 22 New Features of Visual Studio 2008 for .NET Professionals 50 New Features of SQL Server 2008 IIS 7.0 New features Introduction It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios. C# 4.0 The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include a. objects from dynamic programming languages, such as Python or Ruby b. COM objects accessed through IDispatch c. ordinary .NET types accessed through reflection d. objects with changing structure, such as HTML DOM objects While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups: Dynamic lookup Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead gets resolved at runtime. Named and optional parameters Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position. 
COM specific interop features Dynamic lookup as well as named and optional parameters both help making programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience. Variance It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type safe “co-and contravariance” and common BCL types are updated to take advantage of that. Dynamic Lookup Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway, in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call. The dynamic type C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion Dynamic operations Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and settings fields and properties d[“one”] = d[“two”]; // getting and setting thorugh indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic. Runtime lookup At runtime a dynamic operation is dispatched according to the nature of its target object d: COM objects If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties. 
Runtime lookup
At runtime a dynamic operation is dispatched according to the nature of its target object d:

COM objects
If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling into COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.

Dynamic objects
If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM, to allow direct access to the object's properties using property syntax.

Plain objects
Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

Example
Assume the following code:

dynamic d1 = new Foo();
dynamic d2 = new Bar();
string s;
d1.M(s, d2, 3, null);

Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments:
1. a string
2. a dynamic
3. a literal int 3
4. a literal object null"
At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder steps in to finish the overload resolution job based on runtime type information, proceeding as follows:
1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
2. Method lookup and overload resolution are performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
3. If the method is found it is invoked; otherwise a runtime exception is thrown.

Overload resolution with dynamic arguments
Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

Foo foo = new Foo();
dynamic d = new Bar();
var result = foo.M(d);

The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

The Dynamic Language Runtime
An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain.

Open issues
There are a few limitations and things that might work differently than you would expect.
· The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
· Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
· Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.
One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

dynamic collection = …;
var result = collection.Select(e => e + 5);

If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

Named and Optional Arguments
Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list.
Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

Optional parameters
A parameter is declared optional simply by providing a default value for it:

public void M(int x, int y = 5, int z = 7);

Here y and z are optional parameters and can be omitted in calls:

M(1, 2, 3); // ordinary call of M
M(1, 2); // omitting z – equivalent to M(1, 2, 7)
M(1); // omitting both y and z – equivalent to M(1, 5, 7)

Named and optional arguments
C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

M(1, z: 3); // passing z by name

or

M(x: 1, z: 3); // passing both x and z by name

or even

M(z: 3, x: 1); // reversing the order of arguments

All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1.
Optional and named arguments can be used not only with methods but also with indexers and constructors.
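To illustrate that last point, here is a minimal sketch (the Connection type and its parameters are hypothetical) of optional and named arguments on a constructor:

public class Connection
{
    // host is required; port and timeoutSeconds have defaults
    public Connection(string host, int port = 80, int timeoutSeconds = 30)
    {
        // store the settings; body elided in this sketch
    }
}

var c1 = new Connection("example.org");                    // port 80, timeout 30
var c2 = new Connection("example.org", 8080);              // port 8080, timeout 30
var c3 = new Connection("example.org", timeoutSeconds: 5); // omit port, name the timeout

The same pattern replaces what would otherwise be several constructor overloads differing only in which parameters they accept.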
Overload resolution
Named and optional arguments affect overload resolution, but the changes are relatively simple:
· A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type.
· Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes.
· If two signatures are equally good, one that does not omit optional parameters is preferred.

M(string s, int i = 1);
M(object o);
M(int i, string s = "Hello");
M(int i);

M(5);

Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn't convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

Features for COM interop
Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

Dynamic import
Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance.
In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

excel.Cells[1, 1].Value = "Hello";

instead of

((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

and

Excel.Range range = excel.Cells[1, 1];

instead of

Excel.Range range = (Excel.Range)excel.Cells[1, 1];

Compiling without PIAs
Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application.
The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

Omitting ref
Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

Open issues
A few COM interface features are still not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

Variance
An aspect of generics that often comes across as surprising is that the following is illegal:

IList<string> strings = new List<string>();
IList<object> objects = strings;

The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

objects[0] = 5;
string s = strings[0];

allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety.
However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

IEnumerable<object> objects = strings;

there is no way we can put the wrong kind of thing into strings through objects, because objects doesn't have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

Covariance
In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T> : IEnumerator
{
    bool MoveNext();
    T Current { get; }
}

The "out" in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes "covariant" in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B.
As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above:

var result = strings.Union(objects); // succeeds with an IEnumerable<object>

This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

Contravariance
Type parameters can also have an "in" modifier, restricting them to occur only in input positions. An example is IComparer<T>:

public interface IComparer<in T>
{
    int Compare(T left, T right);
}

The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
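To make contravariance concrete, here is a minimal sketch (the TextComparer type is hypothetical, and it assumes the variant IComparer<in T> declaration above – note the CTP caveat under Limitations below):

// Orders any two objects by their string representation (null handling elided).
class TextComparer : IComparer<object>
{
    public int Compare(object left, object right)
    {
        return string.CompareOrdinal(left.ToString(), right.ToString());
    }
}

IComparer<object> general = new TextComparer();
IComparer<string> forStrings = general; // allowed: IComparer<in T> is contravariant

One comparer instance can now be reused wherever a comparer of a more derived element type is expected.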
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

public delegate TResult Func<in TArg, out TResult>(TArg arg);

Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

Limitations
Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion.
Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

COM Example
Here is a larger Office automation example that shows many of the new C# features in action.

using System;
using System.Diagnostics;
using System.Linq;
using Excel = Microsoft.Office.Interop.Excel;
using Word = Microsoft.Office.Interop.Word;

class Program
{
    static void Main(string[] args)
    {
        var excel = new Excel.Application();
        excel.Visible = true;
        excel.Workbooks.Add(); // optional arguments omitted

        excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically
        excel.Cells[1, 2].Value = "Memory Usage"; // accessed

        var processes = Process.GetProcesses()
            .OrderByDescending(p => p.WorkingSet)
            .Take(10);

        int i = 2;
        foreach (var p in processes)
        {
            excel.Cells[i, 1].Value = p.ProcessName; // no casts
            excel.Cells[i, 2].Value = p.WorkingSet; // no casts
            i++;
        }

        Excel.Range range = excel.Cells[1, 1]; // no casts

        Excel.Chart chart = excel.ActiveWorkbook.Charts.
            Add(After: excel.ActiveSheet); // named and optional arguments

        chart.ChartWizard(
            Source: range.CurrentRegion,
            Title: "Memory Usage in " + Environment.MachineName); // named + optional

        chart.ChartStyle = 45;
        chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
            Excel.XlCopyPictureFormat.xlBitmap,
            Excel.XlPictureAppearance.xlScreen);

        var word = new Word.Application();
        word.Visible = true;
        word.Documents.Add(); // optional arguments
        word.Selection.Paste();
    }
}

The code is much more terse and readable than the C# 3.0 counterpart.
Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

Relationship with Visual Basic
A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:
· Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
· Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
· No-PIA and variance are both being introduced to VB and C# at the same time.
VB in turn is adding a number of features that have hitherto been a mainstay of C#.
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

Resources
All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn't hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn't have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn't have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn't the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn't be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don't know about, but they shouldn't consistently be the ones who know the most. It's not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I've been coding since I was tiny. At university there were people who were cleverer than me, but there weren't very many who were more experienced programmers than me. During my internship, I didn't find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate's not so big on the actual code review, at least it wasn't when I started. We did an entire product release and then somebody looked over all of the UI of that product which I'd written and said what they didn't like. By that point, it was way too late and I'd disagree with them. Do you think the lack of code reviews was a bad thing? I think if there's going to be any oversight of new people, then it should be continuous rather than chunky. For me I don't mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I'd be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you've worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isometric projections myself once. I didn't know what the word was so I couldn't Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don't need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other's toes. You could lock, but locks are really dangerous if you're using more than one of them. You get interactions like deadlocks, and that's just nasty. Actors instead allow you to say this piece of data belongs to this thread of execution, and nobody else can read it.
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases? It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother because it's easier to just spin them up in separate threads and they can proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you're using actors, you presumably still have to write your code in a different way from how you would write single-threaded code. You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks so that you can use "async" and "await" to wait asynchronously for it. But other than that, you can still stick things in classes so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump. If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It's hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it's like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism.
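As a concrete illustration of the idea Alex describes – one thread owning a piece of data, with all access arriving as messages – here is a minimal hypothetical sketch in plain C#; it is illustrative only, not NAct's actual API:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class DictionaryActor
{
    private readonly Dictionary<string, int> data = new Dictionary<string, int>();
    private readonly BlockingCollection<Action> mailbox = new BlockingCollection<Action>();

    public DictionaryActor()
    {
        var worker = new Thread(() =>
        {
            // Only this thread ever touches 'data', so no locks are needed.
            foreach (var message in mailbox.GetConsumingEnumerable())
                message();
        });
        worker.IsBackground = true;
        worker.Start();
    }

    public void Set(string key, int value)
    {
        mailbox.Add(() => data[key] = value); // fire-and-forget message
    }

    public Task<int> Get(string key)
    {
        // Ask the owning thread for the value; the reply comes back via a Task,
        // which the caller can await instead of blocking.
        var reply = new TaskCompletionSource<int>();
        mailbox.Add(() => reply.SetResult(data[key]));
        return reply.Task;
    }
}

Callers never touch the dictionary directly: a write is a fire-and-forget message, and a read returns a Task to await, mirroring the "ask by sending a message, reply by a message" rule without any locks.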
Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. C# doesn't, so there's a limit to how well C# actors can possibly scale because there's a single garbage collected heap shared between all of them. When you do a global garbage collection, you've got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they're insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it's quite well-suited to support actors, even if it's not baked into the language. Speaking of language design, do you have a favourite programming language? I'll choose a language which I've never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don't have to write tests so much. When you say it doesn't have some of the niggles? C# doesn't allow the use of a property as a method group. It doesn't have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you've checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That's actually the major niggle. C# is pretty good, and I'm quite happy with C#. And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far? I'm quite a pragmatist, I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it. What about things in C#? For instance, there's contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile? I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition, I will only identify the experiment that's going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it's quite top-down. Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don't really hold much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see does this look right or wrong. We've covered how you get to grips with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather be using them as an aim while I'm learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I'm talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I've never had the opportunity to learn a codebase which has had tests, I don't know what it's like! What about when you're actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you're an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you're experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it's not really about size of codebase. If I go and write up a tiny program, I'm still going to get value out of writing it in C# using ReSharper because I'm experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple of days ago by Red Hat. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating middleware infrastructure, especially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading the points: "Why should I choose JBoss instead of WebLogic?"

1) Multi Datacenter Deployment and Clustering
- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools at higher prices. When you consider a middleware solution to host your business-critical application, you should weigh every architectural aspect of the solution. Fail-over support is just one small aspect of a truly reliable solution: if you do not plan for D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have an extra cost that will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.
- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate product that so far does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.
- WebLogic Server 12c supports the concept of Node Manager, a separate process that runs on the physical or virtual servers and extends the administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology; whole server instances must be managed individually.
- WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: securing the administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.
- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by powerful software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems such as Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support for Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics
- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6 on the other hand has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and administrators are left to analyse thousands of log lines to find out what is going on.
- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which provides administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has a classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-party tools to do something similar. Again, only log searching is offered to find out what is going on.
- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all of this with deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even with JBoss ON ("Operations Network"), which is a separate product, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.
- WebLogic Server 12c, through its JRockit support, offers a tool called JRockit Flight Recorder, which can give developers complete visibility into a period of production activity with zero extra overhead. This automatic recording allows you to deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles; to observe stop-the-world pauses in real time; and to analyse generational, reference-counting and parallel collections as well as mutator threads. JBoss EAP 6 does not come close to supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration
- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides XML-centric administration. JBoss, after ten years, started developing a rudimentary centralized administration that still leaves many administration tasks uncovered, so administrators and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads to error-prone and risky deployments.
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; administrators must be highly skilled in JBoss internal architecture and its management capabilities.
- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even this minimal JBoss management capability is not available for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.
- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; the administration skills required differ depending on the mode used.
- WebLogic Server 12c has no single-point-of-failure processes and does not need to define any specialized server. The domain model in WebLogic has been available for at least ten years and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity, the PC ("Process-Controller") and the HC ("Host-Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model does.
- WebLogic Server 12c supports a parallel deployment model, which enables multiple artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (an EAR of 120 MB, for instance) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss.

4) Support and Upgrades
- WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management; each JBoss EAP instance must be patched manually. To get this capability, you need to buy a separate product called JBoss ON ("Operations Network") that manages this type of task. But so far JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.
- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in the JBoss stack in the last five years, revealing an inability to create a well-architected and proven middleware technology.
- WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support to get patch recommendations.
- Oracle WebLogic Server, independent of the version, has eight years of support with new patches, and lifetime availability of existing patches beyond that.
JBoss EAP 6 on the other hand provides patches for a specific application server version up to five years after the release date; JBoss EAP 4 and previous versions had only four years. A good question that Red Hat will struggle to answer is: "What happens when you find issues after year five?"

5) RAC ("Real Application Clusters") Support
- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc). The Oracle JDBC thin driver is also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, and manual intervention is necessary in case of planned or unplanned RAC downtime; in JBoss EAP 6 the situation does not re-establish automatically after downtime.
- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand has no performance gains at all, even when admin people implement some kind of connection-pooling tuning.
- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution that is distributed not only across a LAN cluster but also across data centers. JBoss EAP 6 on the other hand has no such support.

6) Standards and Technology Support
- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6 on the other hand became fully Java EE 6 compatible only in the community version three months later, and production ready only a few days before this article was written, in June 2012. Red Hat says that they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, historically they have been slow to deliver the newest technologies and standards adherence.
- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers are using Oracle HotSpot as the JVM, which means that without Oracle involvement, their support is limited exclusively to the application server layer, and we all know that many problems happen in the JVM layer.
- WebLogic Server 12c natively supports JDK 7, which empowers developers to get the most out of the Java platform's productivity when writing code. This feature differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exceptions with one try block, Strings in switch statements, and JVM improvements in terms of JDBC, I/O, networking, security and concurrency, as well as the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here.
JBoss EAP 6 on the other hand does not officially support JDK 7; Red Hat comments in the community version that "Java SE 7 can be used with JBoss 7", which does not give you any guarantee of enterprise support for JDK 7.
- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use the WebLogic transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all WebLogic monitoring and administration capabilities. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a dubious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offer any special integration; it is just a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

7) Lightweight Development
- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, in order to offer you a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. Users of JBoss suffer critical issues at deployment time, including: changing the libraries and dependencies of the application; patching the DTD or XSD deployment descriptors; refactoring application layers due to classloading issues and anomalies; and rebuilding persistence, business and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend". If you have the culture or enterprise IT directive of developing Java EE applications using community middleware and, at some point in the future, transitioning to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.
- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly interesting if you plan to use automated setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.
- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands. Tasks such as downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6 on the other hand offers limited integration with those tools.
- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6 on the other hand has no such feature; you need to manually disable the containers that you do not want to use.
- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c release, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the application's libraries and dependencies, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding the persistence, business and web layers because of "the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend". If your enterprise IT culture is to develop Java EE applications on community middleware and, at some point, transition to vendor-supported middleware, Oracle WebLogic plus Oracle GlassFish offers a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly relevant if you plan to automate the setup of application server instances (for example, to rapidly provision a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete Maven integration, allowing developers to set up WebLogic domains with a few commands. Tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers only limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving enabled only the web container with the Java EE 6 Web Profile. JBoss EAP 6, on the other hand, has no such feature; you need to manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and do not want to redeploy it entirely. This is the same behavior most application servers offer for JSP pages, but WebLogic Server 12c gives you the same feature for Java classes in general. JBoss EAP 6, on the other hand, has no such support; even JBoss EAP 5 still does not.

8) JMS and Messaging

- WebLogic Server 12c has had a proven, highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, ships a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments; when asked why they changed the implementation and caused such a mess, the answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c includes troubleshooting and monitoring features in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, relies on an Oracle database or MySQL as its JMS storage mechanism. This means that if an issue happens in production and Red Hat affirms that the performance problem is caused by the database, they will not support you on it; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics, and Foreign JMS provider support, all of which leverage external JMS implementations transparently, without compromising developer code. JBoss EAP 6 does not even come close to supporting such features.

9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with its messaging implementation (JBossMQ), which was a problem for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6, on the other hand, has no such support, and even if it gains one in the future, it will probably support only its own application server.

- Coherence can be used to manage both L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan has no successful case of L1 or L2 cache support with Hibernate, which makes us question its viability. A minimal sketch of Coherence's client API is shown below.
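For context, this is what basic Coherence access looks like from application code. This is a hedged sketch, assuming a reachable Coherence cluster and the usual cluster configuration on the classpath; the cache name "products" and the stored values are purely illustrative.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ProductCacheClient {
    public static void main(String[] args) {
        // Obtain a reference to a named cache; Coherence transparently
        // handles partitioning and replication across the cluster
        NamedCache products = CacheFactory.getCache("products");

        products.put("sku-1001", "WebLogic Server 12c");
        String cached = (String) products.get("sku-1001");
        System.out.println("Cached value: " + cached);

        // Release local cluster resources when the client is done
        CacheFactory.shutdown();
    }
}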
10) Performance

- WebLogic Server 12c is certified on Oracle Exalogic Elastic Cloud and can run unchanged applications on that engineered system. Customers benefit from Exalogic optimizations at both the kernel and JVM layers that can boost performance up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers have no option to deploy on an ultra-fast Java system when their project becomes critical and performance issues are detected.

- WebLogic Server 12c maintains a performance gain across each new release: since WebLogic 5.1, the overall performance gain has been close to 4X, roughly a 20% gain release over release. JBoss, on the other hand, publishes no SPECjAppServer or SPECjEnterprise performance benchmarks. Their so-called "performance gains" remain hidden in their customers' environments, which leaves us wondering whether they are real, since we will never get access to those environments.

- WebLogic Server 12c holds industry performance benchmarks with submissions across platforms and configurations, leading SPECjAppServer performance in multiple categories and fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss, again, publishes no SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on critical resource utilization such as CPU usage. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6, on the other hand, has no comparable feature and probably never will. Without work managers, JBoss EAP 6 forces administrators, and especially developers, to chase performance gains intrusively, by rewriting code and doing performance refactorings. A hedged sketch of dispatching work through a work manager appears below.
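As a rough sketch only: WebLogic implements the CommonJ Work Manager API (commonj.work), and the snippet below dispatches a task through it. The JNDI name used here is a hypothetical binding for the purposes of this sketch; real applications typically declare their work managers in deployment descriptors.

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class ReportTask implements Work {

    // The actual work, executed on a thread governed by the work
    // manager's administrator-defined allocation and priority rules
    public void run() {
        System.out.println("Generating report on a managed thread...");
    }

    public boolean isDaemon() { return false; }
    public void release() { }

    // Illustrative submission helper; "java:comp/env/wm/default" is an
    // assumed JNDI name, not necessarily present in every deployment
    public static void submit() throws Exception {
        InitialContext ctx = new InitialContext();
        WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/default");
        wm.schedule(new ReportTask());
    }
}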
11) Professional Services Support

- WebLogic Server 12c, like any other technology sold by Oracle, gives customers the option of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist in deploying new applications, and provide highly skilled consultancy on architecture and best practices, with consultants allocated alongside customer teams. All OCS services are available without restriction, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. Red Hat, on the other hand, offers professional services for JBoss EAP 6 only if you buy subscriptions. If you are developing a new business-critical application and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK, I can help you, but only after you buy subscriptions". Red Hat also does not allow its professional services consultants to manage environments running community-based software; they will probably force you to first buy a subscription, download the "enterprise" version and then, optionally, hire their consultants.

- Oracle offers Oracle University to educate your team on our technologies, including, of course, specialized WebLogic application server training. At any time and location, you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs. Certifications for the products are also available if your technical people want to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team on its technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you ask for more specialized training in JBoss middleware, they will probably refer you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They were unable to reproduce their RHEL education success in their middleware division, because they first need to sell subscriptions before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP, in the case of JBoss), which means the courses tend to be quite outdated. There are reports of developers who took official training from Red Hat this year (2012) in which an advanced JBoss course supposedly still covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training had been created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not encourage the use of "special versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and gain access to the Red Hat enterprise repositories. If you need to test, learn, or just start building your application with Red Hat's middleware, you must download it from the community website; you are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to gain access to the "good" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment possible, but you are not required to do so; only if you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6 pricing, on the other hand, is not publicly available on Red Hat's website or in any other media, or at least such information is not easy to find. The only link you will likely find on their website is "Contact a Sales Representative". That is not a healthy relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In such situations, customers expect software prices to be publicly available, so they can decide, based on the product's existing features, whether the cost is fair.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to back that claim. Don't miss the chance to discover today how WebLogic can fit your needs and sustain your global IT middleware strategy, whether or not that strategy is based on the Cloud.

    Read the article

  • Pass data to Child Window in Silverlight 4 using MVVM

    - by Archie
Hello, I have a datagrid with master detail implementation as follows: <data:DataGrid x:Name="dgData" Width="600" ItemsSource="{Binding Path=ItemCollection}" HorizontalScrollBarVisibility="Hidden" CanUserSortColumns="False" RowDetailsVisibilityChanged="dgData_RowDetailsVisibilityChanged"> <data:DataGrid.Columns> <data:DataGridTextColumn Header="Item" Width="*" Binding="{Binding Item,Mode=TwoWay}"/> <data:DataGridTextColumn Header="Company" Width="*" Binding="{Binding Company,Mode=TwoWay}"/> </data:DataGrid.Columns> <data:DataGrid.RowDetailsTemplate> <DataTemplate> <data:DataGrid x:Name="dgrdRowDetail" Width="400" AutoGenerateColumns="False" HorizontalAlignment="Center" HorizontalScrollBarVisibility="Hidden" Grid.Row="1"> <data:DataGrid.Columns> <data:DataGridTextColumn Header="Date" Width="*" Binding="{Binding Date,Mode=TwoWay}"/> <data:DataGridTextColumn Header="Price" Width="*" Binding="{Binding Price, Mode=TwoWay}"/> <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Button Content="Show More Details" Click="buttonShowDetail_Click"></Button> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid> </DataTemplate> </data:DataGrid.RowDetailsTemplate> </data:DataGrid> I want to open a child window on clicking the button, which shows more details about the product. I'm using the MVVM pattern. My Model contains a method which takes the Item name as input and returns the Details data. My problem is: how should I pass the Item to the ViewModel, which will get the Details data from the Model? And where should I open the new child window, in the View or the ViewModel? Please help. Thanks.

    Read the article

  • solution for RPC_E_ATTEMPTED_MULTITHREAD error caused by SPRequestContext caching SPSites?

    - by kerray
Hi, I'm developing a solution for SharePoint 2007, and I'm using SPSecurity.RunWithElevatedPrivileges a lot, passing in the UserToken of the SystemAccount. After reading http://hristopavlov.wordpress.com/2009/01/19/understanding-sharepoint-sprequest/ I finally began to understand why I get these System.Runtime.InteropServices.COMException (0x80010102): Attempted to make calls on more than one thread in single threaded mode. (Exception from HRESULT: 0x80010102 (RPC_E_ATTEMPTED_MULTITHREAD)) errors, but there seems to be no solution - "known issue in the product". The article is more than a year old, and I wasn't able to find anything more recent and helpful, but I was hoping maybe someone else has? My code goes like this: SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite elevatedSite = new SPSite(web.Site.ID, web.Site.SystemAccount.UserToken)) { using (SPWeb elevatedWeb = elevatedSite.OpenWeb(web.ID)) { // some operations on lists and items obtained through elevatedWeb } } }); The errors come up wherever such elevated code is used, and more often when more users are using these functionalities, so I guess the elevated SPSite is perhaps being cached and reused. Is there any way to solve this? If my understanding is correct, how do I make SharePoint forget about the cached SPSites and use a fresh one instead? Thanks

    Read the article

  • jquery parseFloat assigning val to field

    - by user306472
I have a select box that gives a description of a product along with a price. Depending on what the user selects, I'd like to automatically grab the dollar amount from the selected option and assign it to a price input field. My HTML: <tr> <td> <select class="selector"> <option value="Item One $500">Item One $500</option> <option value="Item Two $400">Item Two $400</option> </select> </td> <td> <input type="text" class="price"></input> </td> </tr> So based on what is selected, I want either 500 or 400 assigned to the .price input. I tried this but I'm not quite sure where I'm going wrong: $('.selector').blur(function(){ var selectVal = ('.selector > option.val()'); var parsedPrice = parseFloat(selectVal.val()); $('.price').val(parsedPrice); });

    Read the article

  • Autoresize UITextView and UIScrollView

    - by dennis3484
Hello guys, I already did several searches on Stack Overflow and Google, but I couldn't find a solution to my problem. I have a Detail View for a product that includes a UIScrollView and a few subviews (UIImageView, UILabel, UITextView). You can find an example here. First I wanna autoresize the UITextView (not the text itself!) to the corresponding height. Then I wanna autoresize the entire UIScrollView so that it fits the height of my UITextView and the elements above it. I've tried the following code: myUITextView.frame = CGRectMake(2.0, 98.0, 316.0, self.view.bounds.size.height); [scrollView setContentOffset:CGPointMake(0.0, 0.0) animated:NO]; scrollView.contentSize = CGSizeMake(320.0, 98.0 + [myUITextView frame].size.height); [self.view addSubview:scrollView]; I used 98.0 + [myUITextView frame].size.height because my thought was: after getting the height of myUITextView, add the height of the other subviews (98.0) to it. Unfortunately it doesn't work very well. Depending on the content of the UIScrollView, it is too short or too long. I did some logging, and the height of the UITextView is always the same: 2010-01-27 14:15:45.096 myApp[1362:207] [myUITextView frame].size.height: 367.000000 Thanks!

    Read the article

  • WatiN SelectList Methods - Page not refreshing/actions not being fired after interacting with a sele

    - by Chad M
Preface: If you don't care about the preface, skip down to the section marked "Question." Hi, Recently my company has upgraded to the latest version of WatiN for its test automation framework. We upgraded to avoid a problem where interacting with a select list would cause an ACCESS DENIED error. This error seems to be a product of the fact that our web application reloads the page it is on (which sits in a frame which sits in a frameset) with new fields after certain select list options are selected. It could also be that our framework, which wraps around WatiN, often does actions on the same SelectList after the page refresh (I'm still looking into this, I'm new to the framework). The new version of WatiN does solve the ACCESS DENIED error, but also seems to stop select lists from firing the action that causes the page to reload w/ its new options. In fact, if you use WatiN to make the selection, the select list won't work correctly, even if manually interacted with, until the page has been forced to refresh. Question: When selecting an option in a SelectList using the newest WatiN code, the event that causes our web app's page to reload with new fields/values does not execute. What are some possibilities that could cause this? The term I've seen used most often to describe the refreshing that occurs when our select lists are used is "double post-back". Many thanks, Chad

    Read the article

< Previous Page | 302 303 304 305 306 307 308 309 310 311 312 313  | Next Page >