Search Results

Search found 2023 results on 81 pages for 'matt anderson'.


  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf* .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white. This last time (just this morning), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: How do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? EDIT: It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
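    For reference, the gconf resets mentioned above usually amount to commands along these lines (a sketch of the era's standard advice, which the poster reports did not help in this case):

        # unset any user overrides of the theme keys (GNOME 2 / gconf)
        gconftool-2 --recursive-unset /desktop/gnome/interface
        gconftool-2 --recursive-unset /apps/metacity/general

        # the more drastic variant: remove per-user config caches, then log out and back in
        rm -rf ~/.gconf ~/.gconfd ~/.gnome2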

    Read the article

  • iptables blank after reboot

    - by theillien
    We've started encountering an issue with iptables on our RHEL 6.3 systems: after a reboot, when the service starts, the rules are not loaded. We get the empty ruleset:

        [msnyder@matt-test ~]$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    This is in spite of the fact that we have rules defined and the service is, indeed, running. I know that because when I run service iptables start it simply drops back to the prompt. If I run service iptables restart it actually stops and then restarts the service. And, of course, if I run service iptables stop it indicates that iptables is actually stopping. Knowing that I need to restart the service, I do so and the rules load up properly. They simply don't get loaded after a reboot. Unless they get loaded differently during a reboot, I don't see how our rules could be wrong. If they were, they wouldn't load during a service restart either. Has anyone else ever encountered this? EDIT: The rules are already saved in /etc/sysconfig/iptables. They are not added on the fly from the command line, so service iptables save is unnecessary.
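    On a RHEL 6 box, a first diagnostic pass for this usually looks something like the following (a sketch; it only confirms that the init script is enabled at boot and that the saved ruleset is intact, part of which the poster has already verified):

        # is the init script enabled for the runlevels you boot into?
        chkconfig --list iptables
        # if not, enable it
        chkconfig iptables on

        # the ruleset the init script loads at boot
        cat /etc/sysconfig/iptables

        # look for a silent failure during startup
        grep -i iptables /var/log/messages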

    Read the article

  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's (e.g. Tom Anderson's) mailboxes, but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. Of course it is preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, and that works almost fine. But in OWA, there are problems:

    1) In the left pane - as I've learned - we can open only Inbox folders from other mailboxes. Is there no way to view all folders like in Outlook?

    2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing happens with the trash can. Is there a way to fix that? Also, this problem exists in desktop Outlook when mailboxes are added automatically via the Auto Mapping feature, so we need to turn it off and add the accounts manually. Is there a simpler workaround?

    3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from Display Name attributes. But those names are all identical! All the mailboxes are owned by John Smith, so they should all be named John Smith - so that the recipient sees "John Smith" in the "from" field, no matter what mailbox it is sent from. Also, the user already knows his own name - no need to tell him. He wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute to put in the "from" field when sending letters.

    4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) To select a mailbox we need to enter its name (or its first letters). Is there a way to show a list of links to the mailboxes the user has full access to? E.g. in the page header... b) If we start entering the first letters, we see a popup list with possible mailboxes to be opened. But it contains all mailboxes (apparently from the GAL), not only the mailboxes the user has permission to open! How do we filter that popup list? c) The same problem as in (3) with mailbox naming. We can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users. In the left pane we see "John Smith", which is useless.

    5) Each mailbox is tied to a separate user in AD. If someone has several mailboxes, we need to have additional dummy AD accounts, create additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure?

    We would really appreciate any answers or additional info for any of these questions. Thank you in advance.
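    A side note on the Auto Mapping part of question 2: since Exchange 2010 SP1 the auto-mapping behaviour can be suppressed per permission grant from the Exchange Management Shell, so the extra accounts can still be added to Outlook manually. A hedged sketch (identities are placeholders):

        Add-MailboxPermission -Identity "jsmith-eu" -User "john.smith" -AccessRights FullAccess -AutoMapping $false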

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson's wonderful SPServices project for. If you haven't already seen or used SPServices, please do. It's a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you're not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let's say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that's easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea:

    You'll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page being displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work.

    The DataView Web Part doesn't look *exactly* like what the out-of-the-box display form page does. Not a huge problem, and it can be overcome with some CSS style overrides, but still, more work.

    A DVWP showing a single record doesn't have the same toolbar that you would get using DispForm.aspx. Not a show-stopper, and you can rebuild the toolbar, but it's going to potentially require code, and then there's the security trimming, etc. that you have to get right.

    DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work.

    For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called "ID" which then displays whatever that item ID number is in the list. Trouble is, when you're coming in from an external app via a link, you don't know what that internal ID is (and frankly shouldn't). I don't like exposing internal SharePoint IDs to the outside world for the same reason I don't do it with database IDs. They're internal, and while it's fine to use them on the site itself, you don't want external links using them. They're volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That's great if you have that ability, but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it's Java, but really I'm a lazy geek). However if you're doing this and have access to call a web service, that would be an option.
    Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That's where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery, b) SPServices, and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:

        <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

    Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that's just a PITA), it provides me with version control of sorts, and it's quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).

        $(document).ready(function() {

            var queryStringValues = $().SPServices.SPGetQueryString();
            var id = queryStringValues["ID"];

            if (id == "0") {
                var customer = queryStringValues["CustomerNumber"];
                var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
                var url = window.location;
                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers",
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                            id = $(this).attr("ows_ID");
                            url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                            window.location = url;
                        });
                    }
                });
            }
        });

    What's happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library, as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string. Line 6: If we pass in "0" it means we're looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn't work unless you pass in something, and "0" is a *magic* number that will invoke the page but not look up a value in the database. Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices. Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item. Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page. As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want.
    For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this:

        http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony

    Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don't want to rely on internal SharePoint IDs.
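    Making the script generic along those lines is a small change to the CAML-building part of the code above (FilterId and FilterValue being the query string parameter names from the example URL):

        var field = queryStringValues["FilterId"];
        var value = queryStringValues["FilterValue"];
        var query = "<Query><Where><Eq><FieldRef Name='" + field + "'/><Value Type='Text'>" + value + "</Value></Eq></Where></Query>";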

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure:

    Getting Started/Overview

    A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage.

    (Automatic) Installation and the Image Packaging System (IPS)

    The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you need to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays, and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first.

    Networking

    One of the first things to do after installing Solaris 11 (or any operating system for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko!
    And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you, in DTracing TCP Congestion Control, how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts!

    DTrace

    And there we are: DTrace, the king of all observability tools. Long time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself!

    Security

    Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: the Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more...

    Developers

    Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects, and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring!

    Desktop and Graphics

    Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released!

    Performance

    Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
    Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11!

    Send Me More Solaris 11 Launch Articles!

    These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far, and share your enthusiasm for Solaris 11!

    *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful.

    1. Advantages of the SPARC T4 processor. The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details on these: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512". I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected:

        $ isainfo -v
        64-bit sparcv9 applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
        32-bit sparc applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32

    2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library."

    3. Dan's blog on Where's the Crypto Libraries? Although this was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms."

    4. Difference between T3 and T4. The diagram in this blog is good and self-explanatory.
    Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x."

    5. About performance counters. In this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations. You can see these using cpustat:

        cpustat -c pic0=Instr_FGU_crypto 5

    You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm." For more details, refer to Martin's blog as well.

    6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration. I found this interesting blog from Darren about how to turn off SPARC T4 or Intel AES-NI crypto acceleration: "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call the getisax(2) system call and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so). The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly.

    For SPARC T4:
        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"
    For Intel systems with AES-NI support:
        export LD_HWCAP="-aes""

    Note that LD_HWCAP is explained in http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html: "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 - Identifies an alternative hardware capabilities value... A "-" prefix results in the capabilities that follow being removed from the alternative capabilities."

    7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing. This whitepaper explains more details. It has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script."
        #!/usr/sbin/dtrace -s

        pid$1:libsoftcrypto:yf*:entry,
        pid$1:libsoftcrypto:rsa*:entry,
        pid$1:libmd:yf*:entry
        {
            @ops[probefunc] = count();
        }

        tick-1sec
        {
            printa(@ops);
            trunc(@ops);
        }

    Note that I have slightly modified the D script to include RSA ("libsoftcrypto:rsa*:entry") as well, as per recommendations from Chi-Chang Lin.

    8. References

    http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf
    https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine
    https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries
    https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4
    http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html
    https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography
    https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet
    https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained
    https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It's always fun in the blog-o-sphere, and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed up by a rebuttal by Marc Anderson on his blog. Okay, now that we have all the players and the stage, what's the big deal? Bjorn started his post saying that you don't use "out-of-the-box" (OOTB) SharePoint because it makes no sense. I have to disagree with his premise, because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he lays claim that modifying, say, the OOTB contacts list by removing (or I suppose adding) a column now puts you in a situation where you're no longer using the OOTB functionality. Really?

    Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn't the way to do this. Last time I checked, most people don't know how to build houses, and last time I checked, people reading technical SharePoint blogs are generally technical people that understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant>

    Okay, where were we? Right, adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don't get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I've been leaning more and more towards OOTB solutions over custom development, for the simple reason that it's expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open Visual Studio. This might be because my day job is with a regulated company and there's more scrutiny with spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part, when perhaps with some creative thinking or expectation setting with customers you can meet the need with what you already have?

    The way I read Bjorn's terminology of "out-of-the-box" is: install the software and tell people to go to a website and admire the OOTB system, but don't change it! For those that know things like WordPress, DotNetNuke, SubText, Drupal or any of those content management/blogging systems, it's akin to installing the software and setting up the "Hello World" blog post or page, then staring at it like it's useful. "Yes, we are using WordPress!". Then not adding a new post, creating a new category, or adding an About page. Perhaps I'm wrong in my interpretation.

    This leads us to: what is OOTB SharePoint? To many people I've talked to in the last few hours on twitter, email, etc., it is *not* just installing software but actually using it as it was fit for purpose. What's the purpose of SharePoint then?
    It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web based interface, and so on. Microsoft has pretty clear definitions of these different levels of SharePoint we're talking about, and I think it's important for everyone to know what they are and what they mean.

    Personalization and Administration

    To me, this is the OOTB experience. You install the product and then are able to do things like create new lists, sites, edit and personalize pages, create new views, etc. Basically you use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However I would argue that anyone who's configured the Outlook reading layout or applied styles to a Word document probably won't have too much difficulty in using SharePoint OUT OF THE BOX.

    Customization

    Here's where things might get a bit murky, but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here, claiming it's no different than C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion) the customization label comes in when I need to access the server (for example creating a custom theme) or have some kind of net-new element I add to the system that wasn't there OOTB. It's not content (like a new list or site); it's code and does something functional.

    Development

    Here's where the propeller hats come on and we're talking algorithms and unit tests and compilers, oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there's the possibility of exceptions being thrown, etc. There are a lot of definitions here, and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?).

    In my experience, it's much more cost effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can't build useful solutions without *some* kind of code (even just some simple jQuery). I think you can get a *lot* of value just from using the OOTB experience, and I don't think you're constraining your users that much. I'm not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they're both correct "from a certain point of view". To me, SharePoint Out of the Box makes total sense and should not be dismissed. I just don't agree with the premise that Bjorn is basing his statements on, but that's just my opinion, and his is different, and never the twain shall meet.

    Read the article

  • Bind to a method in WPF?

    - by Cameron MacFarland
    How do you bind to an object's method in this scenario in WPF?

        public class RootObject
        {
            public string Name { get; }
            public ObservableCollection<ChildObject> GetChildren() {...}
        }

        public class ChildObject
        {
            public string Name { get; }
        }

    XAML:

        <TreeView ItemsSource="some list of RootObjects">
            <TreeView.Resources>
                <HierarchicalDataTemplate DataType="{x:Type data:RootObject}" ItemsSource="???">
                    <TextBlock Text="{Binding Path=Name}" />
                </HierarchicalDataTemplate>
                <HierarchicalDataTemplate DataType="{x:Type data:ChildObject}">
                    <TextBlock Text="{Binding Path=Name}" />
                </HierarchicalDataTemplate>
            </TreeView.Resources>
        </TreeView>

    Here I want to bind to the GetChildren method on each RootObject of the tree. EDIT: Binding to an ObjectDataProvider doesn't seem to work because I'm binding to a list of items, and the ObjectDataProvider needs either a static method, or it creates its own instance and uses that. For example, using Matt's answer I get:

        System.Windows.Data Error: 33 : ObjectDataProvider cannot create object; Type='RootObject'; Error='Wrong parameters for constructor.'
        System.Windows.Data Error: 34 : ObjectDataProvider: Failure trying to invoke method on type; Method='GetChildren'; Type='RootObject'; Error='The specified member cannot be invoked on target.' TargetException:'System.Reflection.TargetException: Non-static method requires a target.
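    For what it's worth, one common workaround (a hedged sketch, not from the thread) is to route the binding through an IValueConverter that invokes the method on whatever item it receives, since a Binding with no Path hands the converter the RootObject itself:

        // sketch; uses System.Windows.Data and System.Globalization
        public class ChildrenConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                var root = value as RootObject;
                return root == null ? null : root.GetChildren();
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    and in the template (with childrenConverter declared as a resource):

        <HierarchicalDataTemplate DataType="{x:Type data:RootObject}"
                                  ItemsSource="{Binding Converter={StaticResource childrenConverter}}">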

    Read the article

  • Subdomain on different host

    - by mattsmith321
    Hi everyone! I'm trying to host a subdomain for my site with a different hosting company and I'm running into issues on how to set it up. Here are the specifics:

    - Domain is registered with GoDaddy.
    - Nameservers are pointing to DiscountASP.net, where an ASP.NET app has been happily running for a couple of years.
    - Would like blog.mydomain.com to point to my account with DreamHost.com to take advantage of their LAMP stack.

    I have added blog.mydomain.com to DreamHost (after adding mydomain.com) via their control panel. I thought I would be able to add a subdomain entry on GoDaddy to point to DreamHost, but all they allow is blog.mydomain.com = new url. In theory I could just take our .biz or .net domain and host it on DreamHost, but I was hoping I could do it all with a subdomain. So, to summarize, I'd like to know if what I want to do is feasible, and if so, how I go about it (given the constraints of GoDaddy, DiscountASP, & DreamHost). Thanks, Matt
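    For what it's worth: because the domain's nameservers point at DiscountASP, the record that sends blog.mydomain.com to DreamHost has to be created in DiscountASP's DNS manager, not GoDaddy's (GoDaddy's subdomain tool only applies when GoDaddy serves the DNS). The entry itself would be a CNAME along these lines (the DreamHost target hostname is a placeholder; their panel shows the real one):

        blog.mydomain.com.    IN    CNAME    mydomain.dreamhosters.com.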

    Read the article

  • PyGTK, Glade, Changing the window view and threads

    - by Gaunt Face
    Heya everyone, Forgive me if this seems like a stupid question; so far nowhere on the internet can I find someone offering a solution to this, and I just wanted to get some feedback from someone with more experience than myself (I've only been using Python, PyGTK and Glade for 2 days now). I have a UI window displaying, and it updates with messages from a thread that is handling a bluetooth connection. This is fine, and I have the application closing and running quite reliably. The problem is, after a bluetooth connection is made I wish to maintain the bluetooth thread (i.e. keep the connection going) but completely change the UI of the main window. Now the impression I am getting from PyGTK applications made from Glade is that the easiest thing to do is just open a new window. Is this really the best option? Can I cut the tree of widgets off at the root, maintaining the window widget but adding on a new set of widgets from a separate Glade file? If opening a new window is the best option, am I right in assuming that the bluetooth thread can be kept alive during this transition, providing I update any callbacks? Any help or pointers would be great. Cheers, Matt
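    On the cut-the-tree-at-the-root idea, a minimal sketch with gtk.Builder (file and widget names invented; the bluetooth thread is untouched because only the widget tree changes, though the swap should happen on the GTK main loop, e.g. via gobject.idle_add, if it is triggered from that thread):

        import gtk

        builder = gtk.Builder()
        builder.add_from_file("second_screen.glade")   # hypothetical second Glade file
        new_root = builder.get_object("root_vbox")     # a non-window top-level container in that file

        old_root = main_window.get_child()             # main_window: the existing gtk.Window
        if old_root is not None:
            main_window.remove(old_root)
        main_window.add(new_root)
        main_window.show_all()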

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to reintegrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people are regularly updating the branch from the trunk, which means that the final merge back is reflective. In my use-case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability we don't want to merge up from the trunk, but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to:

    1.) Create the branch
    2.) Make regular changes to the branch (and trunk)
    3.) Merge back to trunk regularly (daily perhaps)

    Since we will never merge up from trunk, I don't think we need to worry about the problems that reintegrating is designed to fix. Can anyone see a problem with this approach? Cheers, Matt
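    In command form, that workflow is just repeated one-way cherry-picks (revision number and branch path are placeholders):

        # from an up-to-date trunk working copy, pull one fix down from the branch
        svn merge -c 1234 ^/branches/release-1.0 .
        svn commit -m "Merged r1234 (bug fix) from release-1.0"

    Since nothing is ever merged trunk-to-branch, the reflective merges that --reintegrate exists to untangle never occur.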

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example on how to do it "correctly" for the Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode, etc. Ok. Let's say we need to create, say, an "Audit". An Audit is for a company and can be across one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the "audits" based on the store "addresses" in order to send the "auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the "query-able" data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt
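    To make the denormalization advice concrete, a hedged sketch in App Engine's db modeling style, with invented names; the repeated properties on Audit are the "query-able" duplicates the articles recommend:

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty()

        class Store(db.Model):
            company = db.ReferenceProperty(Company, collection_name='stores')
            city = db.StringProperty()
            state = db.StringProperty()
            postcode = db.StringProperty()

        class Audit(db.Model):
            company = db.ReferenceProperty(Company)
            stores = db.ListProperty(db.Key)         # one-to-many without a join entity
            store_states = db.StringListProperty()   # denormalized copy for filtering

        # audits that involve any store in a given state
        audits = Audit.all().filter('store_states =', 'IN').fetch(20)

    Filtering on a list property matches membership, which is what makes the denormalized copy directly query-able.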

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code which works, but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id=numeric(0))

        json_data <- fromJSON(json_file)
        n_people <- length(json_data)
        for(lender in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all=TRUE)
        }

        output_file <- paste("people_data",".csv")
        write.csv(people_data, file=output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]  person_id, name, gender, hair_color
        [[2]]  person_id, name, location, gender, height
        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components: state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
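    One direction worth trying (a sketch, untested against the real files): merge() re-joins the whole accumulated table on every pass, so the loop is roughly quadratic in the number of people. Building one small data frame per person and binding them once at the end avoids that, and plyr's rbind.fill pads missing columns with NA automatically:

        require("RJSONIO")
        require("plyr")

        json_data <- fromJSON(json_file)
        person_frames <- lapply(json_data, function(p) {
            as.data.frame(t(unlist(p)), stringsAsFactors = FALSE)
        })
        people_data <- rbind.fill(person_frames)   # one bind; NAs fill the gaps
        write.csv(people_data, file = "people_data.csv")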

    Read the article

  • ASP.NET MVC Map String Url To A Route Value Object

    - by mwgriffiths
    I am creating a modular ASP.NET MVC application using areas. In short, I have created a greedy route that captures all routes beginning with {application}/{*catchAll}. Here is the action:

        // get /application/index
        public ActionResult Index(string application, object catchAll)
        {
            // forward to partial request to return partial view
            ViewData["partialRequest"] = new PartialRequest(catchAll);
            // this gets called in the view page and uses a partial request class to return a partial view
        }

    Example: the URL "/Application/Accounts/LogOn" will then cause the Index action to pass "/Accounts/LogOn" into the PartialRequest, but as a string value.

        // partial request constructor
        public PartialRequest(object routeValues)
        {
            RouteValueDictionary = new RouteValueDictionary(routeValues);
        }

    In this case, the route value dictionary will not return any values for the routeData, whereas if I specify a route in the Index action:

        ViewData["partialRequest"] = new PartialRequest(new { controller = "accounts", action = "logon" });

    it works, and the routeData values contain a "controller" key and an "action" key; whereas before, the keys are empty, and therefore the rest of the class won't work. So my question is, how can I convert the "/Accounts/LogOn" in the catchAll to new { controller = "accounts", action = "logon" }? If this is not clear, I will explain more! :) Matt

    This is the "closest" I have got, but it obviously won't work for complex routes:

        // split values into array
        var routeParts = catchAll.ToString().Split(new char[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

        // feels like a hack
        catchAll = new { controller = routeParts[0], action = routeParts[1] };
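    Part of the answer is why the string overload comes back empty: the RouteValueDictionary(object) constructor reads public properties off the object, which is what makes an anonymous type work and leaves a plain string with nothing useful. A sketch that generalizes the split (still assuming {controller}/{action}[/{id}]-shaped paths):

        public static RouteValueDictionary ToRouteValues(string catchAll)
        {
            var values = new RouteValueDictionary();
            var parts = (catchAll ?? "").Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length > 0) values["controller"] = parts[0];
            if (parts.Length > 1) values["action"] = parts[1];
            if (parts.Length > 2) values["id"] = parts[2];
            return values;
        }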

    Read the article

  • Merge Multiple Worksheets From Multiple Workbooks

    - by Droter
    Hi, I have found multiple posts on merging data but I am still running into some problems. I have multiple files with multiple sheets. For example, 2007-01.xls ... 2007-12.xls; in each of these files is daily data on sheets labeled 01, 02, 03, ... There are other sheets in the files, so I can't just loop through all worksheets. I need to combine the daily data into monthly data, then all of the monthly data points into yearly. On the monthly data I need it to be added to the bottom of the page. I have added the file open changes for Excel 2007. Here is what I have so far:

        Sub RunCodeOnAllXLSFiles()
            Dim lCount As Long
            Dim wbResults As Workbook
            Dim wbMaster As Workbook

            Application.ScreenUpdating = False
            Application.DisplayAlerts = False
            Application.EnableEvents = False

            On Error Resume Next
            Set wbMaster = ThisWorkbook

            Dim oWbk As Workbook
            Dim sFil As String
            Dim sPath As String

            sPath = "C:\Users\test\" 'location of files
            ChDir sPath
            sFil = Dir("*.xls") 'change or add formats

            Do While sFil <> "" 'will start LOOP until all files in folder sPath have been looped through
                Set oWbk = Workbooks.Open(sPath & "\" & sFil) 'opens the file
                Set oWbk = Workbooks.Open(sPath & "\" & sFil)
                Sheets("01").Select ' HARD CODED FIRST DAY
                Range("B6:F101").Select 'AREA I NEED TO COPY
                Range("B6:F101").Copy
                wbMaster.Activate
                Workbooks("wbMaster").ActiveSheet.Range("B65536").End(xlUp)(2).PasteSpecial Paste:=xlValues
                Application.CutCopyMode = False
                oWbk.Close True 'close the workbook, saving changes
                sFil = Dir
            Loop ' End of LOOP

            On Error Goto 0
            Application.ScreenUpdating = True
            Application.DisplayAlerts = True
            Application.EnableEvents = True
        End Sub

    Right now it can find the files and open them up and get to the right worksheet, but when it tries to copy the data nothing is copied over. Thanks for your help, Matt
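    One detail that stands out (a guess, since On Error Resume Next would hide the failure): wbMaster is an object variable, so Workbooks("wbMaster") looks up a workbook literally named wbMaster, fails, and the paste silently never happens. Using the variable directly would look like:

        wbMaster.ActiveSheet.Range("B65536").End(xlUp)(2).PasteSpecial Paste:=xlValues

    (The duplicated Workbooks.Open line could also be dropped.)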

    Read the article

  • Java runs out of memory, even though I give it plenty!

    - by spitzanator
    Hey, folks. So, I'm running a Java server (specifically Winstone: http://winstone.sourceforge.net/) like this:

        java -server -Xmx12288M -jar /usr/share/java/winstone-0.9.10.jar --useSavedSessions=false --webappsDir=/var/servlets --commonLibFolder=/usr/share/java

    This has worked fine in the past, but now it needs to load a bunch more stuff into memory than it has before. The odd part is that, according to 'top', it has 15.0g of VIRT(ual memory) and its RES(ident set) is 8.4g. Once it hits 8.4g, the CPU hangs at 100% (even though it's loading from disk), and eventually I get Java's OutOfMemoryError. Presumably, the CPU hanging at 100% is Java doing garbage collection. So, my question is, what gives? I gave it 12 gigs of memory! And it's only using 8.2 gigs before it throws in the towel. What am I doing wrong? Oh, and I'm using

        java version "1.6.0_07"
        Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
        Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

    on Linux. Thanks, Matt
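    A standard way to narrow this down is to let the JVM describe what it is doing; these are stock HotSpot 1.6 flags added to the existing command line:

        java -server -Xmx12288M -verbose:gc -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError \
             -jar /usr/share/java/winstone-0.9.10.jar --useSavedSessions=false --webappsDir=/var/servlets --commonLibFolder=/usr/share/java

    Back-to-back full GCs that reclaim almost nothing mean the live set really does outgrow the heap; an error text mentioning "GC overhead limit exceeded" or "PermGen space" points at a limit other than -Xmx (e.g. -XX:MaxPermSize for the latter).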

    Read the article

  • Static library not included in resulting LLVM executable

    - by Matthew Glubb
    Hi, I am trying to compile a C program using LLVM and I am having trouble getting some static libraries included. I have successfully compiled those static libraries using LLVM and, for example, libogg.a is present, as is ogg.l.bc. However, when I try to build the final program, it does not include the static ogg library. I've tried various compiler options, with the most notable being:

        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include -l$OV_DIR/lib/libogg.a -l$OV_DIR/lib/libvorbis.a -o test.exe

    This results in the following output (directories shortened for brevity):

        $OV_DIR/include/vorbis/vorbisfile.h:75: warning: ‘OV_CALLBACKS_DEFAULT’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:82: warning: ‘OV_CALLBACKS_NOCLOSE’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:89: warning: ‘OV_CALLBACKS_STREAMONLY’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:96: warning: ‘OV_CALLBACKS_STREAMONLY_NOCLOSE’ defined but not used
        llvm-ld: warning: Cannot find library '$OV_DIR/lib/ogg.l.bc'
        llvm-ld: warning: Cannot find library '$OV_DIR/lib/vorbis.l.bc'
        WARNING: While resolving call to function 'main' arguments were dropped!

    I find this perplexing because $OV_DIR/lib/ogg.l.bc DOES exist, as does vorbis.l.bc, and they are both readable (as are their containing directories) by everyone. Does anyone have any idea what I am doing wrong? Thanks, Matt
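    For what it's worth, -l takes a library name rather than a path (the driver expands -lfoo into names like libfoo.a, or foo.l.bc under llvm-gcc), which would explain the mangled '$OV_DIR/lib/ogg.l.bc' it goes hunting for. The conventional spelling is -L for the directory plus bare -l names, or simply passing the archives as ordinary arguments:

        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include -L$OV_DIR/lib -logg -lvorbis -o test.exe

        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include $OV_DIR/lib/libogg.a $OV_DIR/lib/libvorbis.a -o test.exe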

    Read the article

  • Adding items to a combo box's internal list programmatically.

    - by Andrew
    So, despite Matt's generous explanation in my last question, I still didn't understand, and decided to start a new project and use an internal list.

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            codesList = [[NSString alloc] initWithContentsOfFile:@".../.../codelist.txt"];
            namesList = [[NSString alloc] initWithContentsOfFile:@".../.../namelist.txt"];

            codesListArray = [[NSMutableArray alloc] initWithArray:[codesList componentsSeparatedByString:@"\n"]];
            namesListArray = [[NSMutableArray alloc] initWithArray:[namesList componentsSeparatedByString:@"\n"]];

            addTheDash = [[NSString alloc] initWithString:@" - "];
            flossNames = [[NSMutableArray alloc] init];
            [flossNames removeAllObjects];

            for (int n = 0; n <= [codesListArray count]; n++) {
                NSMutableString *nameBuilder = [[NSMutableString alloc] initWithFormat:@"%@", [codesListArray objectAtIndex:n]];
                [nameBuilder appendString:addTheDash];
                [nameBuilder appendString:[namesListArray objectAtIndex:n]];
                [comboBoz addItemWithObjectValue:[NSMutableString stringWithString:nameBuilder]];
                [nameBuilder release];
            }
        }

    So this is my latest attempt at this, and the list still isn't showing in my combo box. I've tried using addItemsWithObjectValues outside the for loop, along with the suggestions at this question: Is this the right way to add items to NSCombobox in Cocoa? But still no luck. If you can't tell, I'm trying to combine two strings from the files with a hyphen in between them and then put that new string into the combo box. There are over 400 codes and matching names in the two files, so manually putting them in would be a huge chore; not to mention, I don't see what would be causing this problem. The compiler shows no warnings or errors, and in IB I have it set to use the internal list, but when I run it, the list is not populated unless I do it manually. Some things I thought might be causing it:

    - Being in the applicationDidFinishLaunching: method
    - Having the string and array variables declared as instance variables in the header (along with @property and @synth done to them)
    - Messing around with using appendString multiple times with NSMutableArrays

    Nothing seems to be causing this to me, but maybe someone else will know something I don't. Thanks for the help.
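    One likely culprit, offered as a guess from reading rather than running it: the loop condition n <= [codesListArray count] runs one index past the end of the array, so objectAtIndex: throws an out-of-range exception partway through filling the box. Bounding with < (and MIN() in case the two files differ in line count) would look like:

        NSUInteger pairs = MIN([codesListArray count], [namesListArray count]);
        for (NSUInteger n = 0; n < pairs; n++) {
            NSString *entry = [NSString stringWithFormat:@"%@ - %@",
                               [codesListArray objectAtIndex:n],
                               [namesListArray objectAtIndex:n]];
            [comboBoz addItemWithObjectValue:entry];
        }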

    Read the article

  • Improve speed of a JOIN in MySQL

    - by ran2
    Dear all, I know there are similar threads around, but this is really the first time I realize that query speed might affect me - so it's not that easy for me to really make the transfer from other folks' problems. That being said, I have used the following query successfully with smaller data, but on even mildly large tables (about 120,000 records) I am waiting for hours.

        INSERT INTO anothertable (id, someint1, someint1, somevarchar1, somevarchar1)
        SELECT DISTINCT md.id, md.someint1, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md
        JOIN table2 AS pd ON (md.id = pd.id);

    Tables 1 and 2 contain about 120,000 records each. The query has been running for almost 2 hours right now. Is this normal? Do I just have to wait? I really have no idea, but I am pretty sure that one could do it better, since it's my very first try. I read about indexing, but don't know yet what to index in my case. Thanks for any suggestions - feel free to point me to the very beginners' guides! best matt
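    The usual first step with a join like this is an index on the join column of both tables, plus EXPLAIN to confirm it is used (names as anonymized above):

        ALTER TABLE table1 ADD INDEX idx_id (id);
        ALTER TABLE table2 ADD INDEX idx_id (id);

        EXPLAIN SELECT DISTINCT md.id, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md JOIN table2 AS pd ON md.id = pd.id;

    Without an index on either side, MySQL has to compare each of the ~120,000 rows against the other table row by row, which is where the hours go; with one, each lookup becomes an index probe.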

    Read the article

  • Problems solving oddly acting labels in ie7.

    - by Qwibble
    Okay, so this is sort of a double question, so I'll split it into two.

    First part: In modern browsers the main bold labels sit above their corresponding form elements, and align to the left as is expected. However in IE7, they randomly sit inset by 10-15px. I went through the developer tools and could find nothing to fix it. I've made sure all my margins and padding are reset, so I don't really understand =S Here's the page demo - link. Maybe some of you IE bug-fixing geniuses know what the problem is? =D

    Second part: Again with labels, this time the in-line ones resident next to the check boxes and radio buttons. In modern browsers they sit beside the form elements as expected, but not so in IE7, where they take a new line. I've tried floating, changing margins and everything, but to no effect in sitting them in-line with the div.checker or div.radio that is created by the Uniform jQuery plugin. Here's the page demo - link.

    Sorry for troubling you with my IE7 problems, I know they aren't the most fun to solve. Hopefully someone has the patience to help. Matt

    Read the article

  • Java object graph -> xml when direction of object association needs to be reversed.

    - by Sigmoidal
    An application I have been working on has objects with a relationship similar to the below. In the real application both objects are JPA entities.

        class Underlying {}

        class Thing {
            private Underlying underlying;

            public Underlying getUnderlying() {
                return underlying;
            }

            public void setUnderlying(final Underlying underlying) {
                this.underlying = underlying;
            }
        }

    There is a requirement in the application to create XML of the form:

        <template>
            <underlying>
                <thing/>
                <thing/>
                <thing/>
            </underlying>
        </template>

    So we have a situation where the object graph expresses the relationship between Thing and Underlying in the opposite direction to how it's expressed in the XML. I expect to use JAXB to create the XML, but ideally I don't want to have to create a new object hierarchy to reflect the associations in the XML. Is there any way to create XML of the form required from the entities in their current form (through the use of XML annotations or something)? I don't have any experience using JAXB, but from the limited research I've done it doesn't seem like it's possible to reverse the direction of association in any straightforward way. Any help/advice would be greatly appreciated. One other option that has been suggested is to use XSLT to transform the XML into the correct format. I have done no research on this topic as yet, but I'll add to the question when I have some more info. Thanks, Matt.
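    Whatever produces the XML (JAXB wrapper classes, a hand-rolled writer, or the XSLT route mentioned above), the inversion itself has to happen somewhere: grouping Things by their Underlying. A plain-Java sketch of that step, with invented collection names:

        // invert Thing -> Underlying into Underlying -> [Thing, ...]
        Map<Underlying, List<Thing>> thingsByUnderlying = new LinkedHashMap<>();
        for (Thing thing : allThings) {
            thingsByUnderlying
                .computeIfAbsent(thing.getUnderlying(), u -> new ArrayList<>())
                .add(thing);
        }

    Each map entry then corresponds to one <underlying> element with its nested <thing/> children.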

    Read the article

  • MSSQL 2005 FOR XML

    - by Lima
    Hi, I am wanting to export data from a table to a specifically formatted XML file. I am fairly new to XML files, so what I am after may be quite obvious, but I just can't find what I am looking for on the net. The format of the XML results I need is:

        <data>
          <event start="May 28 2006 09:00:00 GMT" end="Jun 15 2006 09:00:00 GMT" isDuration="true" title="Writing Timeline documentation" image="http://simile.mit.edu/images/csail-logo.gif">
            A few days to write some documentation
          </event>
        </data>

    My table structure is:

        name VARCHAR(50),
        description VARCHAR(255),
        startDate DATETIME,
        endDate DATETIME

    (I am not too interested in the XML fields image or isDuration at this point in time). I have tried:

        SELECT [name]
              ,[description]
              ,[startDate]
              ,[endTime]
        FROM [testing].[dbo].[time_timeline]
        FOR XML RAW('event'), ROOT('data'), type

    Which gives me:

        <data>
          <event name="Test1" description="Test 1 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
          <event name="Test2" description="Test 2 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
        </data>

    What I am missing is that the description needs to be outside of the event attributes, as the element's content between opening and closing event tags. Is anyone able to point me in the correct direction, or point me to a tutorial or similar on how to accomplish this? Thanks, Matt
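    Not from the thread, but the shape described (attributes plus element text) is what FOR XML PATH gives you; a sketch mapping the columns onto the sample, date formatting aside:

        SELECT [name]        AS [@title],
               [startDate]   AS [@start],
               [endDate]     AS [@end],
               [description] AS [text()]
        FROM [testing].[dbo].[time_timeline]
        FOR XML PATH('event'), ROOT('data'), TYPE

    Aliasing a column as [@x] emits it as an attribute of <event>, and [text()] emits it as the element's text content.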

    Read the article

  • Is there a more efficient way to randomise a set of LINQ results?

    - by Matthew De'Loughry
    Hi, just wondering if you could help. I've produced a function to get back a random set of submissions depending on the amount passed to it, but I worry that even though it works now with a small amount of data, when a large amount is passed through it would become inefficient and cause problems. Just wondering if you could suggest a more efficient way of doing the following:

        public List<Submission> GetRandomWinners(int id)
        {
            List<Submission> submissions = new List<Submission>();

            int amount = (DbContext().Competitions
                .Where(s => s.CompetitionId == id).FirstOrDefault()).NumberWinners;

            for (int i = 1; i <= amount; i++)
            {
                bool added = false;
                while (!added)
                {
                    bool found = false;
                    var randSubmissions = DbContext().Submissions
                        .Where(s => s.CompetitionId == id && s.CorrectAnswer).ToList();

                    int count = randSubmissions.Count();
                    int index = new Random().Next(count);

                    foreach (var sub in submissions)
                    {
                        if (sub == randSubmissions.Skip(index).FirstOrDefault())
                            found = true;
                    }

                    if (!found)
                    {
                        submissions.Add(randSubmissions.Skip(index).FirstOrDefault());
                        added = true;
                    }
                }
            }

            return submissions;
        }

    As I say, I have this fully working and bringing back the wanted result; it's just that I'm not liking the foreach and while checks in there, and my head has just turned to mush from trying to come up with the above solution. Thanks, Matt
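    An untested alternative sketch: query once, shuffle in memory with a single Random instance, and take the winners, which removes the per-iteration re-query, the Skip() scans, and the duplicate checks in one go (new Random() inside a tight loop is also a correctness problem, since instances created in quick succession share a time-based seed):

        public List<Submission> GetRandomWinners(int id)
        {
            int amount = DbContext().Competitions
                .Where(c => c.CompetitionId == id)
                .Select(c => c.NumberWinners)
                .FirstOrDefault();

            var rng = new Random();
            return DbContext().Submissions
                .Where(s => s.CompetitionId == id && s.CorrectAnswer)
                .ToList()                  // one round trip to the database
                .OrderBy(_ => rng.Next())  // in-memory shuffle
                .Take(amount)
                .ToList();
        }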

    Read the article

  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of todo-list. To make sure I don't forget, I put an Assert.Fail() in the body. When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so, why?

    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once. Or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test I neatly work test-first.

    @Jimmeh: That looks like a good idea. Ignored tests don't fail but they still show up in a separate list. Have to try that out.

    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.

    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it being abused in another way I abuse it.
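    For the record, the placeholder pattern under discussion looks roughly like this in xUnit.net (test name invented):

        [Fact]
        public void Discount_is_reapplied_when_order_is_amended()
        {
            // reminder test: fails until implemented, and names the intent
            throw new NotImplementedException();
        }

    xUnit.net's [Fact(Skip = "reason")] is the kind of ignore mechanism mentioned above: the test is reported separately without failing the run.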

    Read the article

  • Using ant, rename a directory without knowing the full path?

    - by mixonic
    Howdy friends, Given a zipfile with an unknown directory, how can I rename or move that directory to a normalized path?

        <!-- Going to fetch some stuff -->
        <target name="get.remote">
            <!-- Get the zipfile -->
            <get src="http://myhost.com/package.zip" dest="package.zip"/>
            <!-- Unzip the file -->
            <unzip src="package.zip" dest="./"/>
            <!-- Now there is a package-3d28djh3 directory. The part after package- is a hash and cannot be known ahead of time -->
            <!-- Remove the zipfile -->
            <delete file="package.zip"/>
            <!-- Now we need to rename "package-3d28djh3" to "package". My best attempt is below, but it just moves package-3d28djh3 into package instead of renaming the directory. -->
            <!-- Make a new home for the contents. -->
            <mkdir dir="package" />
            <!-- Move the contents -->
            <move todir="package/">
                <fileset dir=".">
                    <include name="package-*/*"/>
                </fileset>
            </move>
        </target>

    I'm not much of an ant user, any insight would be helpful. Thanks much, -Matt
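    An untested sketch with stock Ant tasks: capture the generated directory name into a property via a dirset and pathconvert, then rename it with a single move (this assumes exactly one package-* directory exists at that point):

        <path id="package.dir.ref">
            <dirset dir="." includes="package-*"/>
        </path>
        <pathconvert property="package.dir" refid="package.dir.ref"/>
        <move file="${package.dir}" tofile="package"/>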

    Read the article
