Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Split DC role from existing Exchange 2007 server

    - by Graeme Donaldson
    We currently have a single Exchange 2007 Server on Windows Server 2008. It's also a DC and I'd like to split the DC role off to a different box. Is this doable without migrating the mailboxes off to a temporary box, re-installing and migrating back? I.e. can I just demote the server without breaking Exchange completely? I know this was quite painful with Server 2003/Exchange 2003, so I'm trying to get an idea of how different the process is for Server 2008/Exchange 2007.

    Read the article

  • /etc/resolv.conf nameserver fd00::1

    - by user88631
    My /etc/resolv.conf constantly gets a mysterious entry. I run a home network with IPv6 router advertisements provided by radvd, and the interface is auto-configured by NetworkManager. All name server lookups fail when this line is first in my /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver fd00::1
        nameserver 192.168.1.1
        search home.int

    When ping is working, cat /etc/resolv.conf shows:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 192.168.1.1
        search home.int

    So something is putting fd00::1 at the start of the file. Note that if I ping6 fd00::1 I get "Destination unreachable: Administratively prohibited". To diagnose this I connected the router to the Ubuntu machine with a single cable, ran tcpdump and restarted networking on the Ubuntu machine. "tcpdump ip6 -e -i eth0 | grep fd00" finds nothing, so the address is not being advertised on the network. The only hit I got was when an upstream router refused a connection attempt from the Ubuntu machine to fd00::1. I have also switched on debugging for NetworkManager, and it appears to be NetworkManager that sets the mystery line:

        15:22:14 storage-pc NetworkManager[349]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
        15:22:14 storage-pc NetworkManager[349]: <warn> dnsmasq exited with error: Other problem (5)
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281528] [nm-dns-manager.c:598] update_dns(): updating resolv.conf
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281875] [nm-dns-manager.c:719] update_dns(): DNS: plugin dnsmasq ignored (caching disabled)
        15:22:14 storage-pc NetworkManager[349]: <info> ((null)): writing resolv.conf to /sbin/resolvconf
        15:22:14 storage-pc dbus[2184]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        15:22:14 storage-pc dnsmasq[2875]: reading /etc/resolv.conf
        15:22:14 storage-pc dnsmasq[2875]: using nameserver 192.168.1.1#53
        15:22:14 storage-pc dnsmasq[2875]: using nameserver fd00::1#53

    Any suggestions on how to find out where this comes from?
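
    One way to narrow this down (a sketch of an assumed diagnostic approach, not from the original post; the directory layout varies between releases) is to look at the per-interface records that resolvconf merges, which show which program pushed each nameserver line:

        # List the per-interface nameserver records resolvconf is merging
        # (on some releases the directory is /var/run/resolvconf/interface)
        ls -l /run/resolvconf/interface/

        # Find the record that contains the mystery address
        grep -r 'fd00::1' /run/resolvconf/interface/ 2>/dev/null

        # If the match is in a NetworkManager-owned record, the address was most
        # likely learned via DHCPv6 or router advertisements on that connection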

    Read the article

  • Importing Outlook 2007 rules error

    - by Alex
    I'm trying to move an Outlook 2007 account (POP3, no Exchange) to a new machine and I'm having trouble importing the rules from the old machine to the new one. Here is the deal: I imported the .pst file on the new machine, but when I try to import the rules, every single one of them breaks. The folder and sub-folder hierarchy is preserved upon the import of the .pst, but the rules don't point to the right folder; instead each rule points to "the specified folder". Same OS (Windows XP), same mail client (Outlook 2007), and the .pst file is about 8 GB. Any help is greatly appreciated.

    Read the article

  • Different background color for multiple filetypes in vim

    - by puk
    Is it possible to have different background colors in vim (i.e. one light, one dark) when dealing with files that have multiple filetypes (i.e. :set ft=html.php)? In PHP code with HTML embedded, it can be difficult to spot a single PHP statement amongst dozens of HTML lines, see below. I'll settle for anything, be it a different background color, a marker in the margin, a second left margin (one vim plugin does this for marks), or maybe highlighting the <?php tag (although that's less than ideal). EDIT: I don't think this is possible at the syntax level, as the syntax appears to use a limited number of elements (String, Function, Identifier...). This is no doubt to allow for easy integration with colorschemes. SyntaxAttr is a good plugin to demonstrate this: put the cursor over any part of the code and it will tell you what syntax group it belongs to.

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. This whitepaper covers the following important concepts about Power View: understanding the hardware and software requirements and their download locations; installing and configuring the required infrastructure when Power View and its data models are on the same computer or on different computers; installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012 and Power View in a workgroup; and configuring single sign-on access for double-hop scenarios with and without Kerberos. You can download the whitepaper from here. This whitepaper talks about many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment. If yes, please share your experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    Through help on this site I was able to draw the triangles of an unrotated, unscaled, untransformed mesh created in Blender, exported to OBJ, accurately imported through Assimp and rendered in XNA Graphics. However, after applying a rotation on a single axis (Z) in Blender and adding materials (I wanted to test loading of materials through Assimp), the same mesh appears incomplete. Is something wrong with my view matrix, or is it something else? This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html Camera, view and projection are defined as follows:

        cameraPos = new Vector3(0, 5, 9);
        viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0));
        projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f);

    Rendering is done through this code:

        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);
        effect = new BasicEffect(GraphicsDevice);
        effect.VertexColorEnabled = true;
        effect.View = viewMatrix;
        effect.Projection = projectionMatrix;
        effect.World = Matrix.Identity;
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(vertexBuffer);
            device.Indices = indexBuffer;
            device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList, 0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount);
        }
        base.Draw(gameTime);

    Read the article

  • Join .doc files into one .doc (with keeping the original format of every document)

    - by Shiki
    I have about 50 .doc files that look perfect (they were extracted with Able2Extract). Now I want to join these 50 files into one huge .doc. I've tried using Word's built-in "Insert" feature, but that messed up the formatting. I want to keep everything I have, just document1 - document2 - document3. Nothing "intelligent" or "smart" is needed during the conversion, just the capability of joining them (and thus making them all searchable, which is the ultimate aim). I don't mind if the method/solution adds a single blank page at the end of every document either.

    Read the article

  • How hard is it for a Software Developer to Maintain a Server?

    - by Samy
    I'm a software developer and don't have much experience as a sysadmin. I developed a web app and am considering buying a server and hosting the web app on it. Is this a huge undertaking for a web developer? What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff? I'm a single user and not planning to sell the service to others. Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?
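
    To give a feel for what "keeping up with security patches" involves on a typical Linux server, here is a minimal sketch assuming a Debian/Ubuntu system (an illustration of the routine, not a full hardening guide):

        # Apply pending updates manually
        sudo apt-get update && sudo apt-get upgrade

        # Or let the distribution install security updates automatically
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades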

    Read the article

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    (c) denise carbonell I think Johannes Ernst summarized pretty well, in a broad sense, what happened with OpenID, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from this. First, the "Apple Lesson": if user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad succeeded against the rest of the tablet market. The iPad is not only a very good technology product, it was backed by a very good marketing plan. I know most people do not want to think about marketing here, but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about. Second, the "Facebook Lesson": Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company made it a singular focus. Third, the "Developers Lesson": the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. By using a combination of URL and Javascript, the power a single HTML page now gives a developer writing web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the social graph API had a profound impact on me in designing our next generation APIs. Posted via email from Virtual Identity Dialogue

    Read the article

  • What is the difference between apt-get and dpkg?

    - by William F. Hammond
    Both apt-get and dpkg can be used to install and remove packages. When should I use which? Context: I'm stuck in package limbo between 10.04.4 LTS and 12.04.1 LTS after an attempted upgrade via the package manager. For example, to fix things I wanted to remove "skype" so that the things it depends on could be freed up, but "aptitude" (my normal package management tool) refused to remove it. The advice at http://administratosphere.wordpress.com/2011/11/02/rescuing-an-interrupted-ubuntu-upgrade/ seems helpful but not adequate to resolve my package conflicts. Also, there's a strange thing where the grub menu seems not to be properly interpreted, but eventually I get the splash screen with "/ is not ready yet or not present. Continue to wait; or press S to skip or M to recover manually." Manual recovery puts me in a single-user shell where I can easily remount / as read-write and bring up the network. If I switch to my own user account, the command line seems quite robust, but there seems to be no way to get X11 going.
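
    In short, apt-get is the higher-level tool that talks to the configured repositories and resolves dependencies, while dpkg operates directly on .deb files and the local package database. A small illustration of the split (the "skype" package name is just the example from the question above):

        # apt-get: fetches from repositories and resolves dependencies
        sudo apt-get install skype            # download and install, pulling in dependencies
        sudo apt-get remove skype             # remove, honouring dependency checks

        # dpkg: works on local .deb files and the installed-package database only
        sudo dpkg -i skype.deb                # install a local file; dependencies are not downloaded
        sudo dpkg --remove --force-depends skype   # force removal even if something still depends on it
        dpkg -l | grep -i skype               # query what is recorded as installed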

    Read the article

  • Create Pivot collections much faster than DeepZoomTools CollectionCreator class

    - by John Conwell
    I've been playing with Microsoft Live Labs Pivot to create a hierarchy of collections, all linked together to allow someone to explore a hierarchy of data visually. The problem has been the generation time of the entire hierarchy. I end up creating 500 to 600 collections in total and it takes hours and hours using the CollectionCreator class that comes with the DeepZoomTools. So, digging around, I found a way to make the actual DeepZoom collection creation wicked fast: don't use the CollectionCreator! It turns out Pivot doesn't actually use the image pyramid generated by the CollectionCreator. Or if it does, it's only when you open a new collection and it shows all the images zooming in; once the zoom-in is complete, Pivot uses the individual DeepZoom images. What Pivot does need is the XML generated by the CollectionCreator, which is in a very simple format. So what I did was manually generate the XML for the collection image pyramid, then create the folder structure required (one folder per level of the pyramid) and put a single-pixel PNG file in each folder. Now I can create the required files and folders for 500 collections in about 10 seconds. Sweet! You still have to use the ImageCreator to create a DeepZoom image for each image in the collection, and that still takes some time, but at least the total processing time is way better.

    Read the article

  • Reverting problems caused by checkinstall with gcc build

    - by slavik262
    I recently downloaded the GCC 4.6.2 source in order to play around a bit with C++11. Having been told about checkinstall and its usefulness in installing programs from source, I created a Debian package from the install using sudo checkinstall -D make install. Wanting to see how well the newly created package worked, I removed it using Synaptic Package Manager. As it turns out, the package checkinstall created from make install tried to remove every single file the installation process touched, including shared gcc libraries like /lib64/libgcc_s.so. Despite not being able to run a bunch of programs due to this missing dependency, I was able to restore my system to normal by reinstalling the package from the command line using dpkg. At this point I want to remove the package from the package manager since it's so dangerous, but not remove the installed files. I was looking around in /var/lib/dpkg and found that the package manager seems to be based on text files which list packages and such; can I just remove all mention of the package from the files in /var/lib/dpkg, or is there a safer way to go about this?
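
    For what it's worth, the manual route the question hints at looks roughly like the sketch below. This is purely illustrative and risky: it assumes the checkinstall package is named gcc-4.6.2 (substitute whatever name checkinstall actually used) and that you back everything up first.

        # Back up dpkg's database before touching anything
        sudo cp -a /var/lib/dpkg /var/lib/dpkg.bak

        # The package's file list and maintainer scripts live here
        ls /var/lib/dpkg/info/gcc-4.6.2.*

        # Removing those files and the package's stanza in /var/lib/dpkg/status
        # makes dpkg forget the package without deleting the installed files;
        # edit the status file by hand rather than scripting it
        sudoedit /var/lib/dpkg/status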

    Read the article

  • Can't change user security on folder - Business Objects XI 3.1

    - by Chris W
    I've got a single folder within the list of All Folders that I can't change any user permissions on. I'm logged in as an admin, and when I view security for the folder it says I have full rights to it, yet I can't change anything on it or its sub-folders even though it clearly shows me as having rights to "Modify the rights users have to objects". As a test I added a new sub-folder called Test, which was created OK, but I'm not able to then delete the sub-folder or change its permissions either. Interestingly, we changed permissions on one sub-folder last week without issue, but when I check that folder today I now can't update it. Any ideas anyone?

    Read the article

  • wifi not working hp dv4

    - by Blaze
    I get this output from sudo lshw -C network:

        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:08:00.0
             logical name: eth0
             version: 06
             serial: 3c:4a:92:cd:63:98
             size: 10Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:42 ioport:3000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
        *-network DISABLED
             description: Wireless interface
             product: RT5390 [802.11 b/g/n 1T1R G-band PCI Express Single Chip]
             vendor: Ralink corp.
             physical id: 0
             bus info: pci@0000:0a:00.0
             logical name: wlan0
             version: 00
             serial: 90:00:4e:82:3c:5b
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=rt2800pci driverversion=3.0.0-17-generic-pae firmware=0.34 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:17 memory:c2500000-c250ffff

    I can't enable the wifi using the Fn+F12 key; it's just stuck. In Windows 7 it gets enabled when I press the same keys, but only after Windows has booted up. I just want to use the wifi in any way possible. Can anybody help?
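
    Since the adapter shows up as DISABLED with the rt2800pci driver loaded, the usual first checks are for a soft or hard radio block. A quick sketch using standard tools (nothing specific to this laptop is assumed):

        # Show whether the radio is soft-blocked (software) or hard-blocked (switch/Fn key)
        rfkill list all

        # Clear a soft block; a hard block can only be cleared by the physical switch or Fn key
        sudo rfkill unblock all

        # Reload the driver and check the kernel log for firmware or rfkill messages
        sudo modprobe -r rt2800pci && sudo modprobe rt2800pci
        dmesg | tail -n 20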

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release, but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single-environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that it would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruit of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram below is the neatest way to summarise what the new 2.9 release will allow us to deliver, i.e. previously only one 3D box was possible: Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: oBox,WebLogic on ODA,ODA,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup where we have Squid in the front (caching images, js, css, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd will help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be an ideal replacement for httpd given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like Squid) for static stuff, rewrites URLs, compresses responses and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and wonder if it can help.

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    We will be putting a new Windows 2008 SE Server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC as a backup DC, since the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the w2k servers will be decommissioned, but until then, I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • Simulating a low-bandwidth, high-latency network connection on Linux

    - by Justin L.
    I'd like to simulate a high-latency, low-bandwidth network connection on my Linux machine. Limiting bandwidth has been discussed before, e.g. here, but I can't find any posts which address limiting both bandwidth and latency. I can get either high latency or low bandwidth using tc. But I haven't been able to combine these into a single connection. In particular, the example rate control script here doesn't work for me:

        # tc qdisc add dev lo root handle 1:0 netem delay 100ms
        # tc qdisc add dev lo parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
        RTNETLINK answers: Operation not supported

    How can I create a low-bandwidth, high-latency connection, using tc or any other readily-available tool?
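
    For reference, a combination that has worked on similar setups, offered as a sketch rather than a definitive fix (the device name, rate and delay are placeholders; attaching netem under tbf assumes a kernel where tbf is classful, and only newer netem builds accept a rate option directly):

        # Shape first, then delay: tbf at the root, netem as its child
        tc qdisc add dev eth0 root handle 1: tbf rate 256kbit buffer 1600 limit 3000
        tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 100ms

        # On recent kernels netem can do both in one qdisc
        tc qdisc add dev eth0 root netem delay 100ms rate 256kbit

        # Inspect and remove when done
        tc qdisc show dev eth0
        tc qdisc del dev eth0 root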

    Read the article

  • Excel: Plot order total in map coordinates

    - by Phliplip
    I have a set of data that looks like this:

        X   Y   Amount
        AE  24  $178,00
        Y   27  $162,00
        AD  34  $680,00
        AK  35  $178,00
        Y   25  $29,00
        U   23  $178,00
        X   38  $193,00
        AC  30  $226,00
        AK  39  $152,00
        AJ  34  $217,00
        AC  35  $183,00
        AA  22  $211,00
        Z   19  $172,00
        AJ  32  $187,00
        AF  26  $272,00
        AI  27  $220,00
        AJ  34  $320,00
        AB  32  $183,00
        AB  35  $272,00
        AC  32  $207,00
        AB  28  $178,00
        AC  30  $168,00
        AC  28  $178,00
        AB  32  $310,00
        AD  30  $188,00
        AB  35  $188,00

    The sample above is only an excerpt of the total dataset of 16K rows. Each row represents a single delivery order, where the first two columns are the map coordinate and the third is the purchase amount. Would it be possible to plot the above data in a chart or coordinate system, where each plotted point is a summary of all sales in the same map coordinate? A similar chart of order counts would also be nice to have.

    Read the article

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count outgoing (relayed) messages on a per-user basis in Postfix. I'm managing a small commercial SMTP relay and decided that it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually enforce some limits), possibly in real time. I've looked almost everywhere and have started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter or whatever)?
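
    As a stopgap before writing a policy service or milter, a rough per-user tally can be pulled from the mail log. A sketch, assuming SASL-authenticated relay clients and a Debian-style /var/log/mail.log (adjust the path and log format to your system):

        # Count today's messages per authenticated sender
        grep 'sasl_username=' /var/log/mail.log \
          | grep -o 'sasl_username=[^, ]*' \
          | sort | uniq -c | sort -rn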

    Read the article

  • (Open Source) Cloud-Filesystem to run a Database on Top?

    - by jens
    Hello, what are the current technologies and implementations for getting a filesystem with unlimited capacity by using single servers with their hard disks to form a "grid/cloud filesystem"? I need to have unlimited space (by adding further servers), but it must be a filesystem that is capable of running a database on top. I know of Apache Hadoop, but that does not seem to be ideal for running a DB on top of it (or am I wrong?). And iSCSI seems to be "remote/networked", but I do not know how, and whether, this is clusterable. Thank you very much! jens

    Read the article

  • How-to read data from selected tree node

    - by Frank Nimphius
    By default, the SelectionListener property of an ADF bound tree points to the makeCurrent method of the FacesCtrlHierBinding class in ADF to synchronize the current row in the ADF binding layer with the selected tree node. To customize the selection behavior, or just to read the selected node value in Java, you override the default configuration with an EL string pointing to a managed bean method property. In the following I show how you change the selection listener while preserving the default ADF selection behavior. To change the SelectionListener, select the tree component in the Structure Window and open the Oracle JDeveloper Property Inspector. From the context menu, select the Edit option to create a new listener method in a new or an existing managed bean. For this example, I created a new managed bean.
    On tree node select, the managed bean code below prints the selected tree node value(s):

        import java.util.Iterator;
        import java.util.List;

        import javax.el.ELContext;
        import javax.el.ExpressionFactory;
        import javax.el.MethodExpression;
        import javax.faces.application.Application;
        import javax.faces.context.FacesContext;

        import oracle.adf.view.rich.component.rich.data.RichTree;
        import oracle.jbo.Row;
        import oracle.jbo.uicli.binding.JUCtrlHierBinding;
        import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;

        import org.apache.myfaces.trinidad.event.SelectionEvent;
        import org.apache.myfaces.trinidad.model.CollectionModel;
        import org.apache.myfaces.trinidad.model.RowKeySet;
        import org.apache.myfaces.trinidad.model.TreeModel;

        public class TreeSampleBean {

            public TreeSampleBean() {}

            public void onTreeSelect(SelectionEvent selectionEvent) {
                // original selection listener set by ADF:
                // #{bindings.allDepartments.treeModel.makeCurrent}
                String adfSelectionListener = "#{bindings.allDepartments.treeModel.makeCurrent}";

                // Make sure the default selection listener functionality is preserved.
                // You don't need to do this for multi-select trees, as the ADF binding
                // only supports single current row selection.

                /* START PRESERVE DEFAULT ADF SELECT BEHAVIOR */
                FacesContext fctx = FacesContext.getCurrentInstance();
                Application application = fctx.getApplication();
                ELContext elCtx = fctx.getELContext();
                ExpressionFactory exprFactory = application.getExpressionFactory();

                MethodExpression me =
                    exprFactory.createMethodExpression(elCtx, adfSelectionListener,
                                                       Object.class,
                                                       new Class[] { SelectionEvent.class });
                me.invoke(elCtx, new Object[] { selectionEvent });
                /* END PRESERVE DEFAULT ADF SELECT BEHAVIOR */

                RichTree tree = (RichTree) selectionEvent.getSource();
                TreeModel model = (TreeModel) tree.getValue();

                // get selected nodes
                RowKeySet rowKeySet = selectionEvent.getAddedSet();
                Iterator rksIterator = rowKeySet.iterator();

                // for single-select configurations, this is only called once
                while (rksIterator.hasNext()) {
                    List key = (List) rksIterator.next();
                    CollectionModel collectionModel = (CollectionModel) tree.getValue();
                    JUCtrlHierBinding treeBinding =
                        (JUCtrlHierBinding) collectionModel.getWrappedData();
                    JUCtrlHierNodeBinding nodeBinding = treeBinding.findNodeByKeyPath(key);
                    Row rw = nodeBinding.getRow();

                    // Print the first row attribute. Note that in a tree you have to
                    // determine the node type if you want to select node attributes
                    // by name and not by index.
                    String rowType = rw.getStructureDef().getDefName();

                    if (rowType.equalsIgnoreCase("DepartmentsView")) {
                        System.out.println("This row is a department: " +
                                           rw.getAttribute("DepartmentId"));
                    } else if (rowType.equalsIgnoreCase("EmployeesView")) {
                        System.out.println("This row is an employee: " +
                                           rw.getAttribute("EmployeeId"));
                    } else {
                        System.out.println("Huh????");
                    }
                    // ... do more useful stuff here
                }
            }
        }

    Download JDeveloper 11.1.2.1 Sample Workspace

    Read the article

  • Keeping packages on a large number of openSUSE servers updated

    - by Kamil Kisiel
    Question for anyone out there managing a network of openSUSE machines: how do you keep track of and apply updates? I know about YaST Online Update (YOU), but it seems more geared towards keeping a single machine up to date and doesn't seem to scale well to a larger number of machines. How do you keep your machines updated? Our network is fairly heterogeneous in terms of package installation, as the servers are mostly infrastructure machines with varying roles. I know that SUSE Linux Enterprise has tools to manage updates network-wide, but upgrading to that is currently not an option for budget reasons.
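
    One way this is often scripted without a central management tool (a sketch; the hostnames are placeholders and it assumes zypper on each box plus ssh key access from an admin machine):

        # Refresh repositories and apply all needed patches, non-interactively
        for host in web01 db01 build01; do
            ssh root@"$host" 'zypper --non-interactive refresh && zypper --non-interactive patch'
        done

        # Report which machines still have outstanding patches
        for host in web01 db01 build01; do
            echo "== $host =="
            ssh root@"$host" 'zypper list-patches'
        done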

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article "Standards Matter: The Battle for Interoperability Goes On". Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better. The alternative: splintered networks and broken promises. The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important." In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not just protect an existing product. Customers must also be willing to work together and not demand a solution that meets only their own needs rather than the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services that they purchase and deploy. Managers who deploy systems without compliance to standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article
