Search Results

Search found 1039 results on 42 pages for 'jon mcintosh'.


  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. The other two get a "Google Chrome could not find" message; I have tried FF and IE as well, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow and sometimes I also get the same errors as on the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, or a hosting issue? Any advice is greatly appreciated.

    Here is the ping from a working machine:

        C:\Users\Jon>ping www.balihaicruises.com

        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47

        Ping statistics for 208.113.173.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
          1     1 ms    17 ms     3 ms  192.168.1.1
          2    42 ms    37 ms    36 ms  180.254.224.1
          3    39 ms    47 ms    40 ms  180.252.1.69
          4    36 ms   616 ms    57 ms  61.94.115.221
          5    84 ms    76 ms    80 ms  180.240.191.98
          6    73 ms    80 ms    72 ms  180.240.191.97
          7   157 ms   143 ms   116 ms  180.240.190.82
          8   115 ms   113 ms   120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
          9   331 ms   332 ms   335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
         10   327 ms   330 ms   331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
         11   437 ms   415 ms   350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
         12   322 ms   823 ms   398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
         13   328 ms   336 ms   326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
         14   326 ms   328 ms   336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
         15   327 ms   331 ms   333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for the machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.

        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
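
    A note for anyone hitting the same symptom: the failing machine cannot even resolve the hostname ("Ping request could not find host"), which points at DNS on that machine or its configured resolver rather than at the site or the host. A minimal C# sketch (a hypothetical diagnostic, not from the original post) to test resolution programmatically:

        using System;
        using System.Net;
        using System.Net.Sockets;

        class DnsCheck
        {
            static void Main()
            {
                const string host = "www.balihaicruises.com";
                try
                {
                    // Resolves via the machine's configured DNS servers
                    IPHostEntry entry = Dns.GetHostEntry(host);
                    foreach (IPAddress address in entry.AddressList)
                        Console.WriteLine("{0} -> {1}", host, address);
                }
                catch (SocketException ex)
                {
                    // Mirrors the "could not find host" failure on the broken machines
                    Console.WriteLine("Resolution failed: {0}", ex.Message);
                }
            }
        }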

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life.

    As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; and I could see from the feedback we got in the UX tests that what I was doing was an important part of the product. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge University graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience.

    On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have.

    What I’ve done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were.

    All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes it easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammars in. We also liked the idea of separating the grammar building from parsing a text.

    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project.

    Unfortunately, there wasn’t an MDX grammar already written for GOLD, so I had to write it myself. I referenced MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t easy to find and implement. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar.

    When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen.

    Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What’s coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make.

    So long! And maybe I’ll see you next summer!
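
    To make the whitespace and keyword-case behaviour concrete, here is a deliberately naive C# sketch of that comparison rule (not the actual SSAS Compare implementation, which parses the scripts first):

        using System;
        using System.Linq;

        static class MdxComparer
        {
            // Split a script into tokens, treating any run of whitespace as a separator
            static string[] Tokenize(string script)
            {
                return script.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
            }

            // Two scripts count as equivalent if their tokens match case-insensitively,
            // so "select" on one side and "SELECT" on the other is not a difference.
            // (Naive on purpose: real MDX needs case-insensitivity for keywords only,
            // not for string literals - which is one reason SSAS Compare parses first.)
            public static bool AreEquivalent(string left, string right)
            {
                return Tokenize(left).SequenceEqual(Tokenize(right),
                                                    StringComparer.OrdinalIgnoreCase);
            }
        }

    For example, MdxComparer.AreEquivalent("select  [A]", "SELECT [A]") returns true, while a plain string comparison would flag the two scripts as different.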

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience

    If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing.

    “Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs,” said Jeremy Ashley, Vice President, Oracle Applications User Experience.

    You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below).

    Inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011.

    Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located.

    We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time.

    For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page.

    APPS UX OPENWORLD SESSIONS

    Oracle’s Roadmap to a Simple, Modern User Experience
    Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRel
    Session ID: CON9467 | Date: Wednesday, Oct. 3 | Time: 3:30 - 4:30 p.m. | Location: Moscone West - 3002/3004

    Oracle Fusion Applications: Transforming Insight into Action
    Presenters: Killian Evers and Kristin Desmond, Oracle
    Session ID: CON8718 | Date: Thursday, Oct. 4 | Time: 11:15 a.m. - 12:15 p.m. | Location: Moscone West - 2008

    “FRIENDS OF UX” OPENWORLD SESSIONS

    Sessions by Oracle Usability Advisory Board (OUAB) members:

    Advances in Oracle Enterprise Governance, Risk, and Compliance Manager
    Presenters: Koen Delaure, KPMG Advisory NV, and Oracle Usability Advisory Board member; Russell Stohr, Oracle
    Session ID: CON9389 | Date: Tuesday, Oct. 2 | Time: 1:15 - 2:15 p.m. | Location: Palace Hotel - Concert

    Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps
    Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, Oracle
    Session ID: CON9401 | Date: Monday, Oct. 1 | Time: 12:15 - 1:15 p.m. | Location: InterContinental - Sutter

    Showcase of JD Edwards EnterpriseOne Mobility
    Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle
    Session ID: CON9123 | Date: Tuesday, Oct. 2 | Time: 1:15 - 2:15 p.m. | Location: InterContinental - Grand Ballroom B

    Sessions by the Fusion User Experience Advocates (FXA):

    Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware
    Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training Resources
    Session ID: UGF10371 | Date: Sunday, Sept. 30 | Time: 11 a.m. - 11:45 a.m. | Location: Moscone West - 2010

    Ten Things to Love About Oracle Fusion Project Portfolio Management
    Presenter: Floyd Teter, EiS Technologies
    Session ID: CON6021 | Date: Tuesday, Oct. 2 | Time: 10:15 - 11:15 a.m. | Location: Moscone West - 2003

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web, and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn’t much information in the world that doesn’t have some concept of “when”, so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files, and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar etc., where they can be parsed and have the semantic data (when, where and who) extracted from them.

    Importing of iCalendar format data is really only half the trick though; in my opinion the real value of iCalendar-formatted calendars is the ability to subscribe to them. Subscribing has a single benefit over importing, but that benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain – a single update to an iCalendar means that all subscribers are automatically made aware of the change, and there is zero effort on the part of the subscriber; as my former colleague Howard van Rooijen is fond of saying, “work smarter not harder” – nowhere is this edict more ably demonstrated than subscribing versus importing of calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology, or better still go and seek out Jon Udell, who speaks very authoritatively on the issue of iCalendar.

    With this subject of iCalendar on my mind, I was interested to discover (via Steve Clayton’s blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess, this was a file that was made available so that it could be imported into your calendar of choice. It has one obvious downside though: right now nobody knows who is going to be playing in the knock-out stages, so the calendar shows no teams being named after 25th June.

    How much more useful would this calendar have been if the BBC had made it possible to subscribe to the calendar instead? The calendar could then be updated with the teams for the knock-out stages when they are known, and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well, or perhaps even post a match report from the BBC sport pages; when calendars are made subscribable, a sea of opportunity opens up for distribution of information.

    So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs:

    HTML: http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html
    iCalendar: webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics

    The link you’re really interested in is the second one - click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place. Some people have told me they’re having trouble with the iCalendar link, in which case hit the HTML link and then click “View ICS” at the resultant web page.

    I shall endeavour to keep the calendar updated throughout the World Cup, and even if I don’t, you’re no worse off than if you had imported the BBC’s .ics file, so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you’ll be carrying match reports around with you without having to do a single thing. Surely that’s worth a quick click, isn’t it?

    If you have any thoughts, let me have them in the comments below. Thanks for reading.

    @Jamiet
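
    For readers who have never looked inside a .ics file: it is plain text, and the semantic data (when, where and who) maps onto simple properties. A minimal illustrative event (the values here describe the 2010 opening match and are not copied from the BBC file):

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//Example//World Cup 2010//EN
        BEGIN:VEVENT
        UID:match001@example.com
        DTSTART:20100611T140000Z
        SUMMARY:South Africa v Mexico
        LOCATION:Soccer City, Johannesburg
        END:VEVENT
        END:VCALENDAR

    A subscribable calendar is simply a URL serving a file like this that the publisher can keep re-generating; the subscriber's client re-fetches it on a schedule.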

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011—“Redefining Leadership”—held from Nov 2nd to Nov 4th in San Francisco, with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gather to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference:

    1. The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The “BSR GlobeScan State of Sustainable Business Poll 2011” - a survey of nearly 500 business leaders from 300 member companies - shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years, but consider integrating sustainability into their core business functions the key challenge. It is still difficult for many companies that are committed to the sustainability agenda to find investors that understand the long-term implications; as Al Gore said, “Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market.”

    2. Companies are required to address increasing compliance requirements and transparency in their supply chain, especially in relation to conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor upstream the sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue, companies need to collaborate and partner with peer companies in their industry as well as in other industries to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs’ (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public has the opportunity to place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance.

    3. A new standard for reporting on supply chain greenhouse gas emissions is available. The new “Scope 3” Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company’s products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn’t just for leading companies but a necessity for any business.

    4. Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. investors are increasingly including ESG data in their analyses. This trend will likely increase as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment.

    5. Software companies are offering an increasing variety of solutions to help drive changes and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle, was a panelist in the “Trends in Sustainability Software” session and commented: “How we think about our business decisions really comes down to how we think about cost. And as long as we don’t assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines ‘Is this product profitable?’ we would then have a much better decision.”

    For more information on BSR visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference.

    By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • How to Save Hundreds or Thousands of Dollars on Cell Phone Service

    - by Chris Hoffman
    Cell phone contracts are bad. You get a seemingly cheap phone up front, but you more than pay for the cost of the phone over two years. Prepaid phone plans are surging in North America for a reason. Prepaid phone plans will be cheaper and more flexible than traditional contracts with big carriers for many people. However much you use your phone, there’s a good chance you can save money with a prepaid service.

    No More Contracts

    Here’s how cell phone service typically works in North America: You get a subsidized phone for “free”, $99, or $199. You sign up for a two-year contract and more than pay back the cost of that phone over the length of the contract. This is similar to leasing something or purchasing it on a credit card and paying it back over two years — you spend less up front, but you’re paying more in the long run.

    But this isn’t the only option. You could opt for a cheaper prepaid service that doesn’t lock you into a contract. If you don’t use your phone much, you could just pay for what you use and avoid the hefty cell phone bills. If you use your phone a lot, you could get a cheaper plan, too.

    Now, this certainly isn’t for everyone. If you want the latest iPhone or Galaxy smartphone every two years and require a 4G data connection, prepaid services may not be for you. On the other hand, if you don’t need the latest phone, you can save money here. You can also save a huge amount of money if you don’t use your phone much.

    Phone Options

    When you choose your prepaid or contract-free service, you’ll often be able to purchase a phone from them. You’ll generally be able to find dirt-cheap dumbphones and the cheapest, slowest Android phones for not very much money. If you want a top-of-the-line smartphone, you’ll have to pay the full, unsubsidized price. That’s $649 for either an iPhone 5S or Samsung Galaxy S4.

    Whatever phones the service provider offers, you could always buy a phone elsewhere — for example, you could buy an unsubsidized iPhone direct from Apple and then take it to your cell phone service of choice. Most services will allow you to get a SIM card and pop it into your existing phone rather than purchasing a phone.

    If you can get a hand-me-down smartphone, you can often save quite a bit of money. For example, you may have a family member upgrading from an iPhone 4S to an iPhone 5S. You could take their phone to a prepaid carrier and have a nicer phone on a cheap cell phone plan. If you brought an old smartphone to a big carrier like AT&T or Verizon, they wouldn’t give you a discount on your monthly plan. You’d have to pay the same amount of money every month as if you had gotten a subsidized phone.

    Google’s Nexus phones are also great options for people looking to buy smartphones and pay up-front. Google’s Nexus 4 offered a modern, almost top-of-the-line Android smartphone experience at $299 or $349 when it came out last year. Google will soon be releasing the Nexus 5 and it’s expected to be priced at $349. That’s certainly a lot more than a cheap phone, but it’s a fairly high-end smartphone at almost half the price of an iPhone 5S or Galaxy S4. Nexus phones can be purchased online from Google’s Play Store.

    Service Options

    When choosing a service, you need to consider what you actually use. If you’re someone who only uses your phone rarely, you can get plans that will allow you to pay as little as a few dollars per month. If you’re someone who’s usually in range of Wi-Fi, you may not need much data at all. If you want a plan with unlimited talk, texting, and data usage, you can get it for much cheaper than you’d pay on a major carrier like AT&T.

    The options here range from pay-as-you-go plans, like the ones offered by T-Mobile, which allow you to put a certain amount of money in and only drain that balance when you actually use minutes, texts, or data. If you only make a few calls and send a few texts per month, you’d only pay a few bucks. On the other end, Walmart’s Straight Talk service is a popular option that offers unlimited talk, texting, and data at $45 per month.

    Which service is right for you depends on a lot of things, including your usage and what each network’s coverage is like in your area. You’ll want to do some research of your own before choosing a service. Prepaid services also offer you even more flexibility after you choose one. If you’re not happy or a better deal comes along, you can switch — you’re not locked into your service for two years and you won’t pay an early termination fee.

    Image Credit: Intel Free Press on Flickr, Jon Fingas on Flickr, John Karakatsanis on Flickr, kendalkinggroup on Flickr

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).

    UppercuT v1

    Builds On Linux

    Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.

    Multi-Targeting

    Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property, i.e. “net-3.5, net-4.0”, to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework’s requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.

    Semantic Versioning

    By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org

    SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets, Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts: one for the assembly version, one for the file version, and one for the product version.

    All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch, i.e.: 1.1.0
    Assembly Version - The assembly version follows SemVer most closely. The last digit is always 0. Major.Minor.Patch.0, i.e.: 1.1.0.0
    File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build, i.e.: 1.1.0.2650
    Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN): 1.1.0.235, i.e. (Git/HG): 1.1.0.a45ace4346adef0

    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).

    Gems Support

    Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).

    Nitriq Support

    Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config. Then add the Nitriq project files at the root of your source. Please refer to the Nitriq documentation on how these are created.

    UppercuT v1.1

    Obfuscation

    One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!

    Closing

    Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
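
    As a footnote to the Semantic Versioning section above: in .NET terms the three versioning concepts correspond to the three standard assembly attributes. A sketch using the post's own 1.1.0 examples (UC generates these values for you; this just shows what the output amounts to):

        using System.Reflection;

        // Assembly version: SemVer octets with a fixed 0 at the end,
        // so binding compatibility is not broken by every build
        [assembly: AssemblyVersion("1.1.0.0")]

        // File version: the build number occupies the last octet
        [assembly: AssemblyFileVersion("1.1.0.2650")]

        // Product/informational version: source control revision or hash last
        [assembly: AssemblyInformationalVersion("1.1.0.a45ace4346adef0")]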

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: We are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (the new version of ADAM with Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer; under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). In each user are one or more certificates in the UserCertificate attribute. A combination of in-house written application code and proprietary PKI code reads and publishes these certificates to validate financial transactions.

    As the LDAP path of the certificates is stored within the customer certificates (and within the application code) and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it in LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP-compliant directory to host these certificates.)

    We have fully tested the application using the data in LDS and everything works fine - here is my dilemma and question (finally, phew!). There was no process put in place for removing revoked or expired certificates; consequently the vast majority of the data is completely useless, as the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory, I would like to start with a clean directory.

    Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling this may be able to be used, but in exactly what way I am not sure. I would prefer to write a script to which I could specify an expiration date (say any certs that expired prior to 2010) and then run it against the LDS partition. This way I can reuse the script periodically to clean up the directory (as mentioned above, I have no way to adjust the applications that are writing the certs; this is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import.

    Any help and advice MUCH appreciated. Cheers, Jon
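
    One route that avoids CAPICOM entirely is .NET's DirectoryServices together with X509Certificate2, which can read the expiry date (NotAfter) straight from the stored bytes. A rough sketch in C# (the same classes are callable from VB); the connection string and filter are placeholders, so test against a copy of the partition first:

        using System;
        using System.DirectoryServices;
        using System.Security.Cryptography.X509Certificates;

        class CertCleanup
        {
            static void Main()
            {
                DateTime cutoff = new DateTime(2010, 1, 1);

                // Placeholder LDS path - adjust host, port and partition DN
                using (var root = new DirectoryEntry("LDAP://localhost:389/o=myorg,c=AU"))
                using (var searcher = new DirectorySearcher(root, "(userCertificate=*)"))
                {
                    searcher.PageSize = 1000; // page through all ~60,000 users

                    foreach (SearchResult result in searcher.FindAll())
                    {
                        DirectoryEntry user = result.GetDirectoryEntry();
                        PropertyValueCollection certs = user.Properties["userCertificate"];

                        // Walk backwards so RemoveAt doesn't shift pending indexes
                        for (int i = certs.Count - 1; i >= 0; i--)
                        {
                            var cert = new X509Certificate2((byte[])certs[i]);
                            if (cert.NotAfter < cutoff)
                                certs.RemoveAt(i);   // drop the expired value
                        }
                        user.CommitChanges();
                    }
                }
            }
        }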

    Read the article

  • Reflection - SetValue of array within class?

    - by Jaymz87
    OK, I've been working on something for a while now, using reflection to accomplish a lot of what I need to do, but I've hit a bit of a stumbling block... I'm trying to use reflection to populate the properties of an array of a child property... not sure that's clear, so it's probably best explained in code.

    Parent Class:

        Public Class parent
            Private _child As childObject()
            Public Property child As childObject()
                Get
                    Return _child
                End Get
                Set(ByVal value As childObject())
                    _child = value
                End Set
            End Property
        End Class

    Child Class:

        Public Class childObject
            Private _name As String
            Public Property name As String
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Private _descr As String
            Public Property descr As String
                Get
                    Return _descr
                End Get
                Set(ByVal value As String)
                    _descr = value
                End Set
            End Property
        End Class

    So, using reflection, I'm trying to set the values of the array of child objects through the parent object... I've tried several methods... the following is pretty much what I've got at the moment (I've added sample data just to keep things simple):

        Dim Results(1) As String
        Results(0) = "1,2"
        Results(1) = "2,3"

        Dim parent As New parent
        Dim child As childObject() = New childObject() {}
        Dim PropInfo As PropertyInfo() = child.GetType().GetProperties()
        Dim i As Integer = 0
        For Each res As String In Results
            Dim ResultSet As String() = res.Split(",")
            ReDim child(i)
            Dim j As Integer = 0
            For Each PropItem As PropertyInfo In PropInfo
                PropItem.SetValue(child, ResultSet(j), Nothing)
                j += 1
            Next
            i += 1
        Next
        parent.child = child

    This fails miserably on PropItem.SetValue with ArgumentException: Property set method not found. Anyone have any ideas?

    @Jon: Thanks, I think I've gotten a little further by creating individual child objects and then assigning them to an array... The issue is now trying to get that array assigned to the parent object (using reflection). It shouldn't be difficult, but I think the problem comes because I don't necessarily know the parent/child types. I'm using reflection to determine which parent/child is being passed in. The parent always has only one property, which is an array of the child object. When I try assigning the child array to the parent object, I get an invalid cast exception saying it can't convert Object[] to childObject[].

    EDIT: Basically, what I have now is:

        Dim PropChildInfo As PropertyInfo() = ResponseObject.GetType().GetProperties()
        For Each PropItem As PropertyInfo In PropChildInfo
            PropItem.SetValue(ResponseObject, ResponseChildren, Nothing)
        Next

    ResponseObject is an instance of the parent Class, and ResponseChildren is an array of the childObject Class. This fails with: Object of type 'System.Object[]' cannot be converted to type 'childObject[]'.
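
    The usual fix when the element type is only known at runtime is to build the array with Array.CreateInstance, so that what reaches SetValue really is a childObject[] rather than an Object[]. A sketch of the idea, in C# for brevity (the same calls exist in VB; AssignChildren is a hypothetical helper):

        using System;
        using System.Reflection;

        static class ReflectionHelper
        {
            // Assigns an array of child objects to the parent's single
            // array-valued property via reflection.
            public static void AssignChildren(object parent, object[] children)
            {
                // The parent always has only one property: the child array
                PropertyInfo arrayProp = parent.GetType().GetProperties()[0];
                Type elementType = arrayProp.PropertyType.GetElementType();

                // Create a childObject[] (not an object[]) so the assignment
                // inside SetValue does not throw an invalid cast exception;
                // every element must actually be an instance of the element type
                Array typed = Array.CreateInstance(elementType, children.Length);
                children.CopyTo(typed, 0);

                arrayProp.SetValue(parent, typed, null);
            }
        }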

    Read the article

  • How do I prevent the concurrent execution of a javascript function?

    - by RyanV
    I am making a ticker similar to the "From the AP" one at The Huffington Post, using jQuery. The ticker rotates through a ul, either by user command (clicking an arrow) or by an auto-scroll. Each list-item is display:none by default. It is revealed by the addition of a "showHeadline" class which is display:list-item.

    HTML for the UL looks like this:

        <ul class="news" id="news">
            <li class="tickerTitle showHeadline">Test Entry</li>
            <li class="tickerTitle">Test Entry2</li>
            <li class="tickerTitle">Test Entry3</li>
        </ul>

    When the user clicks the right arrow, or the auto-scroll setTimeout goes off, it runs a tickForward() function:

        function tickForward(){
            var $active = $('#news li.showHeadline');
            var $next = $active.next();
            if($next.length==0) $next = $('#news li:first');
            $active.stop(true, true);
            $active.fadeOut('slow', function() {$active.removeClass('showHeadline');});
            setTimeout(function(){$next.fadeIn('slow', function(){$next.addClass('showHeadline');})}, 1000);
            if(isPaused == true){
            } else{
                startScroll()
            }
        };

    This is heavily inspired by Jon Raasch's A Simple jQuery Slideshow. Basically: find what's visible, find what should be visible next, make the visible thing fade and remove the class that marks it as visible, then fade in the next thing and add the class that makes it visible.

    Now, everything is hunky-dory if the auto-scroll is running, kicking off tickForward() once every three seconds. But if the user clicks the arrow button repeatedly, it creates two negative conditions:

    1. Rather than advance quickly through the list for just the number of clicks made, it continues scrolling at a faster-than-normal rate indefinitely.
    2. It can produce a situation where two (or more) list items are given the .showHeadline class, so there's overlap on the list.

    I can see these happening (especially #2) because the tickForward() function can run concurrently with itself, producing different sets of $active and $next. So I think my question is: What would be the best way to prevent concurrent execution of the tickForward() method?

    Some things I have tried or considered:

    Setting a Flag: When tickForward() runs, it sets an isRunning flag to true, and sets it back to false right before it ends. The logic for the event handler is set to only call tickForward() if isRunning is false. I tried a simple implementation of this, and isRunning never appeared to be changed.

    The jQuery queue(): I think it would be useful to queue up the tickForward() commands, so if you clicked it five times quickly, it would still run as commanded but wouldn't run concurrently. However, in my cursory reading on the subject, it appears that a queue has to be attached to the object its queue applies to, and since my tickForward() method affects multiple lis, I don't know where I'd attach it.

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice.

    In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name       Age   Position   Date Joined
        Jon Smith  23    Striker    18th Mar 2005
        John Doe   38    Defender   3rd Jan 1988

    In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is because the data is date ordered and I will need to re-sort if the user edits a date field.

    My main question is: what architecture / tools would be best suited to fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were:

    - ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet.
    - Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model.
    - jQuery to allow the user to edit data in the table, error-check edited data entries, etc.

    Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this?

    Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thought is that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using a sort glyph in column headers, etc.) and I don't really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view?

    Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction, and this seemed like it may be a better option than using Partial Views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction, or have I misunderstood its usage?

    Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks
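
    On the Html.RenderAction question: a child action lets you put the re-query and re-sort logic in the action method itself, which is exactly the appeal described above. A rough sketch, assuming ASP.NET MVC 2 (where Html.RenderAction and [ChildActionOnly] are available); all names here are made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class Player
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public string Position { get; set; }
            public DateTime DateJoined { get; set; }
        }

        public class SquadController : Controller
        {
            // Child action: re-queries and re-sorts on every render, so the
            // refreshed view is always in date order after an edit
            [ChildActionOnly]
            public ActionResult PlayerTable()
            {
                IEnumerable<Player> players = LoadPlayers()
                    .OrderBy(p => p.DateJoined);
                return PartialView("_PlayerTable", players);
            }

            private IEnumerable<Player> LoadPlayers()
            {
                // Placeholder for the ADO.NET data access described above
                return new List<Player>();
            }
        }

        // In the parent view (ASPX syntax):
        //   <% Html.RenderAction("PlayerTable", "Squad"); %>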

    Read the article

  • C# Begin/EndReceive - how do I read large data?

    - by ryeguy
    When reading data in chunks of, say, 1024 bytes, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way?

    Edit: I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is:

        public class StateObject
        {
            public Socket workSocket = null;
            public const int BUFFER_SIZE = 1024;
            public byte[] buffer = new byte[BUFFER_SIZE];
            public StringBuilder sb = new StringBuilder();
        }

        public static void Read_Callback(IAsyncResult ar)
        {
            StateObject so = (StateObject) ar.AsyncState;
            Socket s = so.workSocket;

            int read = s.EndReceive(ar);

            if (read > 0)
            {
                so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
                if (read == StateObject.BUFFER_SIZE)
                {
                    s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
                                   new AsyncCallback(Async_Send_Receive.Read_Callback), so);
                    return;
                }
            }

            if (so.sb.Length > 0)
            {
                // All of the data has been read, so display it to the console
                string strContent = so.sb.ToString();
                Console.WriteLine(String.Format("Read {0} bytes from socket, data = {1}",
                                                strContent.Length, strContent));
            }
            s.Close();
        }

    Now this corrected version works fine most of the time, but it fails when the packet's size is a multiple of the buffer size. The reason for this is that if the buffer gets filled on a read, it is assumed there is more data; but then the same problem happens as before. A 2-byte buffer, for example, gets filled twice on a 4-byte packet, and assumes there is more data. It then blocks because there is nothing left to read.

    The problem is that the receive function doesn't know where the end of the packet is. This got me thinking of two possible solutions: I could either have an end-of-packet delimiter, or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested).

    There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kinda sloppy to me. The length header sounds ok, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc.. What should I do?
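
    On the length-prefix question: protocol buffer messages are not self-delimiting, so the usual convention is to write the length yourself before the payload and then read exactly that many bytes. A synchronous C# sketch of the framing logic, assuming a 4-byte little-endian prefix (the same loop structure carries over to Begin/EndReceive):

        using System;
        using System.Net.Sockets;

        static class Framing
        {
            // Keep reading until exactly 'count' bytes have arrived;
            // Receive may legally return fewer bytes than requested.
            static byte[] ReadExactly(Socket socket, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
                    if (read == 0)
                        throw new SocketException((int)SocketError.ConnectionReset);
                    offset += read;
                }
                return buffer;
            }

            // Frame = 4-byte little-endian length header, then the payload
            public static byte[] ReadMessage(Socket socket)
            {
                int length = BitConverter.ToInt32(ReadExactly(socket, 4), 0);
                return ReadExactly(socket, length);
            }
        }

    Because the reader always knows how many bytes remain, the "buffer filled exactly" ambiguity above disappears entirely.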

    Read the article

  • Convert a PDF to a Transparent PNG with GhostScript

    - by Jonathon Wolfe
    Hi all. I am attempting, unsuccessfully, to use Ghostscript to rasterize PDF files with a transparent background to PNG files with a transparent background. I've searched high and low for questions from others attempting the same thing, and none of the posted solutions, which as far as I can tell come down to specifying -sDEVICE=pngalpha, have worked with my test files. At this point I would really appreciate any advice or tips a more experienced hand could provide.

    My test PDF is located here: http://www.kolossus.com/files/test.pdf

    It could be that the issue is with this file, but I doubt it. As far as I can tell, it has no specified background, and when I open the file with a transparency-aware app like Photoshop or Illustrator, sure enough it displays with a transparent background. However, when opened with an application like Adobe Reader, the file is rendered with a white background. I believe that this has more to do with the application rendering the PDF than with the PDF itself -- apps like Adobe Reader assume you want to see what a printed document will look like and therefore always show a white canvas behind the artwork -- but I can't be sure.

    The gs command I'm using is:

        gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 -sOutputFile=test.png test.pdf

    This produces a PNG that has transparent pixels outside of the bounding box of the artwork in the file, but all pixels that are inside the artwork's bounding box are rasterized against a white background. This is a problem for me, as my artwork has drop shadows and antialiased edges that need to be preserved in the final output, and can't just be postprocessed out with ImageMagick. A sample of my PNG output is at the same location as the pdf above, with .png at the end (stackoverflow won't let me include more than one url in my post).

    Interestingly, I see no effects from using the -dBackgroundColor flag, even if I set it to something non-white like -dBackgroundColor=16#ff0000. Perhaps my understanding of the syntax of this flag is wrong. Also interestingly, I see no effects from using the -dTextAlphaBits=4 -dGraphicsAlphaBits=4 flags to try to enable subpixel antialiasing. I would also appreciate any advice on how to enable subpixel antialiasing, especially on text.

    Finally, I'm using GPL Ghostscript 8.64 on Mac OS 10.5.7, and the rendering workflow I'm trying to get set up is to generate transparent PNG images from PDFs output by PrinceXML. I'm calling Ghostscript directly for the rasterization instead of using ImageMagick because ImageMagick delegates to Ghostscript for PDF rasterization, and I should be able to control the rasterization better by calling GS directly.

    Thanks for your help. -Jon Wolfe

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update

    Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar:

        public static class ObservableExtensions
        {
            private class ContinuousAverager
            {
                private double _mean;
                private long _count;

                public ContinuousAverager()
                {
                    _mean = 0.0;
                    _count = 0L;
                }

                // undecided whether this method needs to be made thread-safe or not
                // seems that ought to be the responsibility of the IObservable (?)
                public double Add(double value)
                {
                    double delta = value - _mean;
                    _mean += (delta / (double)(++_count));
                    return _mean;
                }
            }

            public static IObservable<double> ContinuousAverage(this IObservable<double> source)
            {
                var averager = new ContinuousAverager();
                return source.Select(x => averager.Add(x));
            }
        }

    I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that?

    Original Question

    I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this:

        var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick");

        var bids = ticks
            .Where(e => e.EventArgs.Quote.HasBid)
            .Select(e => e.EventArgs.Quote.Bid);

        var bidsSubscription = bids.Subscribe(
            b => Console.WriteLine("Bid: {0}", b)
        );

        var avgOfBids = bids.Average();

        var avgOfBidsSubscription = avgOfBids.Subscribe(
            b => Console.WriteLine("Avg Bid: {0}", b)
        );

    I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this:

        Bid    Avg Bid
        1      1
        2      1.5
        1      1.33
        2      1.5

    It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
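
    For anyone wanting to try the ContinuousAverage extension above, a minimal usage sketch (the Rx namespace varies by version; System.Reactive.Linq is assumed here):

        using System;
        using System.Reactive.Linq;   // assumed Rx namespace; older builds differ

        static class Demo
        {
            static void Main()
            {
                // 1, 2, 1, 2 should produce running means 1, 1.5, 1.33..., 1.5
                var bids = new double[] { 1, 2, 1, 2 }.ToObservable();

                bids.ContinuousAverage()
                    .Subscribe(avg => Console.WriteLine("Avg Bid: {0:0.00}", avg));
            }
        }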

    Read the article

  • Call HttpWebRequest on another thread than the UI with the Task class - avoid disposing of objects created in the Task scope

    - by John
    I would like to call HttpWebRequest on another thread than the UI thread, because I must make 200 requests to the server to download images. My scenario is that I make a request to the server, create an image, and return the image; this I do on another thread. I use the Task class, but it seems to automatically call the Dispose method on all objects created in the task scope, so I return a null object from this method.

        public BitmapImage CreateAvatar(Uri imageUri, int sex)
        {
            if (imageUri == null)
                return CreateDefaultAvatar(sex);

            BitmapImage image = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    Byte[] buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);

                    image = new BitmapImage
                    {
                        CreateOptions = BitmapCreateOptions.None,
                        CacheOption = BitmapCacheOption.OnLoad
                    };
                    image.BeginInit();
                    image.StreamSource = new MemoryStream(buffer);
                    image.EndInit();
                    image.Freeze();
                }
            }).Start();

            return image;
        }

    How do I avoid this? Thanks.

    Following Mr. Jon Skeet's suggestion, I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            //new Task(() =>
            //{
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            //}).Start();
            return new MemoryStream(buffer);
        }

    It returns an object which is null, and then I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            }).Start();
            return new MemoryStream(buffer);
        }

    The method above also returns null.
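
    For what it's worth, the real problem in the attempts above is not disposal: the method returns before the task has finished, so image (or buffer) is still null at the return statement. One .NET 4 approach is to return a Task so the caller decides when to wait; a sketch with a hypothetical helper name:

        using System;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        static class ImageDownloader
        {
            // The download runs off the UI thread; the caller gets a Task
            // instead of a value that may not have been produced yet.
            public static Task<Stream> GetImageStreamAsync(Uri imageUri)
            {
                return Task.Factory.StartNew<Stream>(() =>
                {
                    var request = WebRequest.Create(imageUri);
                    using (var response = request.GetResponse())
                    using (var stream = response.GetResponseStream())
                    {
                        var buffer = new byte[response.ContentLength];
                        int offset = 0, actuallyRead;
                        do
                        {
                            actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                            offset += actuallyRead;
                        } while (actuallyRead > 0);
                        return new MemoryStream(buffer);
                    }
                });
            }
        }

        // Usage: GetImageStreamAsync(uri).ContinueWith(t => { /* build the BitmapImage from t.Result */ });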

    Read the article

  • Delegates in Action - Help

    - by Amutha
    I am learning delegates. I am very curious to apply delegates to the following chain-of-responsibility pattern. Kindly help me with the way to apply delegates to the following piece. Thanks in advance for your effort.

        #region Chain of Responsibility Pattern
        namespace Chain
        {
            public class Player
            {
                public string Name { get; set; }
                public int Score { get; set; }
            }

            public abstract class PlayerHandler
            {
                protected PlayerHandler _Successor = null;

                public abstract void HandlePlayer(Player _player);

                public void SetupHandler(PlayerHandler _handler)
                {
                    _Successor = _handler;
                }
            }

            public class Employee : PlayerHandler
            {
                public override void HandlePlayer(Player _player)
                {
                    if (_player.Score <= 100)
                    {
                        MessageBox.Show(string.Format("{0} is greeted by Employee", _player.Name));
                    }
                    else
                    {
                        _Successor.HandlePlayer(_player);
                    }
                }
            }

            public class Supervisor : PlayerHandler
            {
                public override void HandlePlayer(Player _player)
                {
                    if (_player.Score > 100 && _player.Score <= 200)
                    {
                        MessageBox.Show(string.Format("{0} is greeted by Supervisor", _player.Name));
                    }
                    else
                    {
                        _Successor.HandlePlayer(_player);
                    }
                }
            }

            public class Manager : PlayerHandler
            {
                public override void HandlePlayer(Player _player)
                {
                    if (_player.Score > 200)
                    {
                        MessageBox.Show(string.Format("{0} is greeted by Manager", _player.Name));
                    }
                    else
                    {
                        MessageBox.Show(string.Format("{0} got low score", _player.Name));
                    }
                }
            }
        }
        #endregion

        #region Main()
        void Main()
        {
            Chain.Player p1 = new Chain.Player();
            p1.Name = "Jon";
            p1.Score = 100;

            Chain.Player p2 = new Chain.Player();
            p2.Name = "William";
            p2.Score = 170;

            Chain.Player p3 = new Chain.Player();
            p3.Name = "Robert";
            p3.Score = 300;

            Chain.Employee emp = new Chain.Employee();
            Chain.Manager mgr = new Chain.Manager();
            Chain.Supervisor sup = new Chain.Supervisor();

            emp.SetupHandler(sup);
            sup.SetupHandler(mgr);

            emp.HandlePlayer(p1);
            emp.HandlePlayer(p2);
            emp.HandlePlayer(p3);
        }
        #endregion
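
    One way to answer the question as asked: represent each link as an Action<Player> that closes over the next link, so the abstract base class and the SetupHandler wiring collapse into delegate composition. A console sketch (Console.WriteLine stands in for MessageBox.Show):

        using System;

        public class Player
        {
            public string Name { get; set; }
            public int Score { get; set; }
        }

        static class DelegateChain
        {
            // Wraps a score-range check around the next handler in the chain
            static Action<Player> Handler(Func<Player, bool> canHandle,
                                          string greeter,
                                          Action<Player> next)
            {
                return p =>
                {
                    if (canHandle(p))
                        Console.WriteLine("{0} is greeted by {1}", p.Name, greeter);
                    else
                        next(p);   // pass the player along the chain
                };
            }

            static void Main()
            {
                Action<Player> chain =
                    Handler(p => p.Score <= 100, "Employee",
                    Handler(p => p.Score > 100 && p.Score <= 200, "Supervisor",
                    Handler(p => p.Score > 200, "Manager",
                            p => Console.WriteLine("{0} got low score", p.Name))));

                chain(new Player { Name = "Jon", Score = 100 });      // Employee
                chain(new Player { Name = "William", Score = 170 });  // Supervisor
                chain(new Player { Name = "Robert", Score = 300 });   // Manager
            }
        }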

    Read the article

  • Is this a valid pattern for raising events in C#?

    - by Will Vousden
    Update: For the benefit of anyone reading this, since .NET 4 the lock is unnecessary due to changes in synchronization of auto-generated events, so I just use this now:

        public static void Raise<T>(this EventHandler<T> handler, object sender, T e)
            where T : EventArgs
        {
            if (handler != null)
            {
                handler(sender, e);
            }
        }

    And to raise it:

        SomeEvent.Raise(this, new FooEventArgs());

    Having been reading one of Jon Skeet's articles on multithreading, I've tried to encapsulate the approach he advocates for raising an event in an extension method like so (with a similar generic version):

        public static void Raise(this EventHandler handler, object @lock, object sender, EventArgs e)
        {
            EventHandler handlerCopy;
            lock (@lock)
            {
                handlerCopy = handler;
            }
            if (handlerCopy != null)
            {
                handlerCopy(sender, e);
            }
        }

    This can then be called like so:

        protected virtual void OnSomeEvent(EventArgs e)
        {
            this.someEvent.Raise(this.eventLock, this, e);
        }

    Are there any problems with doing this? Also, I'm a little confused about the necessity of the lock in the first place. As I understand it, the delegate is copied in the example in the article to avoid the possibility of it changing (and becoming null) between the null check and the delegate call. However, I was under the impression that access/assignment of this kind is atomic, so why is the lock necessary?

    Update: With regards to Mark Simpson's comment below, I threw together a test:

        static class Program
        {
            private static Action foo;
            private static Action bar;
            private static Action test;

            static void Main(string[] args)
            {
                foo = () => Console.WriteLine("Foo");
                bar = () => Console.WriteLine("Bar");

                test += foo;
                test += bar;
                test.Test();

                Console.ReadKey(true);
            }

            public static void Test(this Action action)
            {
                action();
                test -= foo;
                Console.WriteLine();
                action();
            }
        }

    This outputs:

        Foo
        Bar

        Foo
        Bar

    This illustrates that the delegate parameter to the method (action) does not mirror the argument that was passed into it (test), which is kind of expected, I guess. My question is: will this affect the validity of the lock in the context of my Raise extension method?

    Update: Here is the code I'm now using. It's not quite as elegant as I'd have liked, but it seems to work:

        public static void Raise<T>(this object sender, ref EventHandler<T> handler, object eventLock, T e)
            where T : EventArgs
        {
            EventHandler<T> copy;
            lock (eventLock)
            {
                copy = handler;
            }
            if (copy != null)
            {
                copy(sender, e);
            }
        }
    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    SOA Community: Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA
    leonsmiers: Can't wait to test it >> RT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6
    Danilo Schmiedel: Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC
    OPITZ CONSULTING: The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow
    demed: Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/
    Danilo Schmiedel: "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity
    Mark Simpson: Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow
    Mark Simpson: RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <-- quick, provision some more compute power on the cloud!
    Oracle SOA: Join us for BPM and Analytics: Process Dashboards, BAM, and Intelligent Optimization. Moscone South - 308 #OracleBPM #OOW
    Oracle SOA: Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq
    Marc: Nice session on customer success stories on #SOA11g with @SOASimone. Pros and cons and architectural overview. #oow pic.twitter.com/bzuhsujm
    Lucas Jellema: Full length keynote on Middleware #oow: http://medianetwork.oracle.com/video/player/1873556035001 … #oow_amis
    OracleBlogs: Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers? http://ow.ly/2stVQ0
    OracleBlogs: Open World Session - BPM, SOA and ADF Combined: Patterns learned from Fusion Applications http://ow.ly/2suhzf
    Ronald Luttikhuizen: VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html …
    Andrejus Baranovskis: @dschmied @soacommunity next OOW for sure, and may be SOA community event! @soacommunity
    Danilo Schmiedel: @andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity
    Lionel Dubreuil: @soacommunity #oow12 Today-1:15pm-Marriott Marquis Salon 7 Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF
    Ronald Luttikhuizen: Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E
    Marc: Nice session on Oracle Virtual Assembly for #SOA11g, @soacommunity Works with #exalogic but not required
    SOA Community: Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-)
    Jon petter hjulstad: Oracle BPM - Big leap forward in 11.1.1.7!
    Whitehorses: Common BPM Use Cases from Oracle #bpm #oow pic.twitter.com/ofOv04EF
    Whitehorses: Oracle BPM 11.1.1.7 top new features. Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un
    SOA Community: Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi
    orclateamsoa: A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU
    Simone Geib: Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO
    Lucas Jellema: My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ … #oow_amis
    gschmutz: Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM
    Ronald Luttikhuizen: Thanks to @soacommunity for great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC
    OracleBlogs: OSB, Service Callouts and OQL http://ow.ly/2sq6B2
    OracleBlogs: Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy
    OracleBlogs: Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg
    Eric Elzinga: OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus
    Donatas Valys: interesting articles about soa industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html …
    gschmutz: "@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com" find mine there..
    Oracle BPM: Customer Experience and BPM – From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi
    SOA Community: @soacommunity SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa
    SOA Community: again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk
    OracleBlogs: SOA Proactive support http://ow.ly/2smrSJ
    demed: @gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661
    demed: Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting out of beaten path. #london #oep http://lockerz.com/s/247636974
    OTNArchBeat: Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY
    SOA Community: Top tweets SOA Partner Community – September 2012 http://wp.me/p10C8u-vc
    SOA Community: Move Data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv
    ServiceTechSymposium: The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com
    Marc: Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good! http://tinyurl.com/8pzd5ww
    SOA Community: BPM Solution Catalogue – promote your process templates http://wp.me/p10C8u-vt
    OTNArchBeat: BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG
    OTNArchBeat: Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv
    Simon Haslam: Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355 …
    Simon Haslam: The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid
    Germán Gazzoni: SOA Spezial II available – Industrialized SOA: the revised and expanded new edition of the SOA Spezial special ed... http://bit.ly/PAWwN9
    Oracle SOA: Flip thru new interactive "Oracle SOA Suite eBook - In the Customers Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ
    SOA Community: Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum
    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Finding a person in the forest, or: Limiting the AD result in SharePoint People Picker.

    There are times when we need to limit the SharePoint audience of certain farms or servers or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note: not all of this can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther Note the long-hyphenated property name. Now to fill in the ADQuery.

    LDAP Query in a nutshell

    Syntax: LDAP is no older than SQL, and an LDAP query is actually a query against the LDAP database. LDAP attributes are the equivalent of database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators including: = Equal, <= Lower than or equal, >= Greater than or equal… and memberOf – a group membership. ! Not, * Wildcard. Equal and memberOf are the most commonly used. Checking for absence uses the ! (not) and the * (wildcard). Example: (SN=Grant) All whose last name – SurName – is Grant. Example: (!(SN=Grant)) All except Grant. Example: (!(SN=*)) all where there is no SurName, i.e. SurName is absent (probably Rappers). Example: (CN=MyGroup) Common Name is MyGroup. Example: (GN=J*) all the Given Names that start with J (JJ, Jane, Jon, John, etc.) The cryptic SN, CN, GN, etc. are attributes; more about them later. All the queries are enclosed in parentheses (Query). Complex queries are comprised of sets that are in AND or OR conditions. AND is denoted by the ampersand (&) and OR is denoted by the vertical pipe (|). The general syntax is that of prefix (Polish) notation, where the operator precedes the operands, e.g. +ab is the sum of a and b. In an LDAP query (&(A)(B)) will garner the objects for which both A and B are true. In an LDAP query (&(A)(B)(C)) will garner the objects for which A, B and C are true. There’s no limit to the number of conditions. In an LDAP query (|(A)(B)) will garner the objects for which either A or B is true. In an LDAP query (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true. There’s no limit to the number of conditions. More complex queries have both types of conditions, and the parentheses determine the order of operations.

    Attributes: Now let’s get into the SN, CN, GN, and other attributes of the query. SN – is the SurName (last name). GN – is the Given Name (first name). CN – is the Common Name, usually GN followed by SN. OU – is an Organizational Unit such as division, department etc. DC – is a Domain Component in the AD forest. l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, is usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to the AD forest in which it lives. Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have a stricter requirement: each group has to have a unique name – its CN – and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different objects like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person).

    Putting it all together: Let’s get a list of all the Johns in the SPAdmin group of the Jerusalem OU in the local domain: (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above: (|(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky)) Here I added the OR pipe at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: you may see a drier but more complete list of attribute rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes). And the third is -pn peoplepicker-serviceaccountdirectorypaths -pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monograph – peoplepicker-searchadcustomquery – is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why?

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (Title=David) Operation completed successfully.
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (!Title=David) Operation completed successfully.
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName) Operation completed successfully.
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.

    That’s all folks!
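    The same custom query can also be tried outside SharePoint before committing it with stsadm. Here is a minimal C# sketch (the LDAP path and DNs below are placeholders taken from the article’s example, not a real forest) that runs the SPAdmin filter through System.DirectoryServices:

        using System;
        using System.DirectoryServices; // add a reference to System.DirectoryServices.dll

        class LdapFilterDemo
        {
            static void Main()
            {
                // Hypothetical root; substitute your own forest's domain components.
                using (DirectoryEntry root = new DirectoryEntry("LDAP://dc=local"))
                using (DirectorySearcher searcher = new DirectorySearcher(root))
                {
                    // The article's example: people in the SPAdmin group of the Jerusalem OU.
                    searcher.Filter =
                        "(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))";

                    foreach (SearchResult result in searcher.FindAll())
                    {
                        // Print each match's Common Name.
                        Console.WriteLine(result.Properties["cn"][0]);
                    }
                }
            }
        }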

    Read the article

  • Stuck removing ffmpeg from system due to package problems

    - by Michiel
    I'd be very grateful to get a bit of support on the following situation: I am using Ubuntu 12.04, and I used the PPA of Jon Severinson for ffmpeg. I had a problem playing movies with mplayer, so I decided to remove the PPA and try to use libav only after I read this article about libav vs ffmpeg. I first removed the PPA from the list, then apt-get update, etc. But I couldn't remove ffmpeg due to dependency errors and got stuck in what seemed like some kind of loop. Then I found a suggestion here. I followed the steps, and ended up force-removing libavcodec-extra-53, because apparently that was "what got stuff moving again". At the moment I can't. Now Ubuntu is reporting a broken package (BrokenCount 0). Errors:

    The dependencies of the following packages could not be installed: audacious-plugins: Depends: audacious-plugins-data (= 3.2.1-4ubuntu1) but 3.2.1-4ubuntu1 is installed Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libcurl3-gnutls (= 7.16.2-1) but 7.22.0-3ubuntu4 is installed Depends: libgcc1 (= 1:4.1.1) but 1:4.6.3-1ubuntu5 is installed Depends: libpulse0 (= 1:0.99.1) but 1:1.1-0ubuntu15.1 is installed Depends: libstdc++6 (= 4.6) but 4.6.3-1ubuntu5 is installed Depends: libwavpack1 (= 4.40.0) but 4.60.1-2 is installed Depends: libxcomposite1 (= 1:0.3-1) but 1:0.4.3-2build1 is installed Depends: zlib1g (= 1:1.1.4) but 1:1.2.3.4.dfsg-3ubuntu4 is installed libavfilter-extra-2: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) but it is not installed Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) but it is not installed Depends: libavformat-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libswresample-extra-0 (= 6:0.10.4.0ubuntu0jon2.2) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libswscale-extra-2 (= 6:0.10.4.0ubuntu0jon2.2) but 6:0.10.4.0ubuntu0jon2.2 is installed libavformat-extra-53: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) but it is not installed Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) but it is not installed Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) but 6:0.10.4.0ubuntu0jon2.2 is installed libk3b6-extracodecs: Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libc6 (= 2.14) but 2.15-0ubuntu10 is installed Depends: libkdecore5 (= 4:4.4.4) but 4:4.8.4a-0ubuntu0.2 is installed Depends: libkio5 (= 4:4.4.4) but 4:4.8.4a-0ubuntu0.2 is installed Depends: libqtcore4 (= 4:4.7.0~beta1) but 4:4.8.1-0ubuntu4.2 is installed Depends: libstdc++6 (= 4.1.1) but 4.6.3-1ubuntu5 is installed libquicktime2: Depends: libavcodec-extra-53 (= 4:0.8~beta2-2) but it is not installed Depends: libswscale-extra-2 (= 4:0.8~beta2-2) but 6:0.10.4.0ubuntu0jon2.2 is installed libxine1-ffmpeg: Depends: libavcodec-extra-53 (= 4:0.7.3-1) but it is not installed Depends: libavutil-extra-51 (= 4:0.7.3-1) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libc6 (= 2.4) but 2.15-0ubuntu10 is installed Depends: libpostproc-extra-52 (= 4:0.7.3-1) but it is not installed Depends: libxine1-bin (= 1.1.20-2build1) but 1.1.20-2build1 is installed mencoder: Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libpostproc-extra-52 (= 4:0.8-1~) but it is not installed Depends: libswscale-extra-2 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed mplayer: Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libpostproc-extra-52 (= 4:0.8-1~) but it is not installed Depends: libswscale-extra-2 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed vlc: Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but 2.0.3-0ubuntu0.12.04.1 is installed Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libc6 (= 2.15) but 2.15-0ubuntu10 is installed Depends: libfreetype6 (= 2.2.1) but 2.4.8-1ubuntu2 is installed Depends: libgcc1 (= 1:4.1.1) but 1:4.6.3-1ubuntu5 is installed Depends: libqtcore4 (= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is installed Depends: libqtgui4 (= 4:4.7.0~beta1) but 4:4.8.1-0ubuntu4.2 is installed Depends: libstdc++6 (= 4.6) but 4.6.3-1ubuntu5 is installed Depends: zlib1g (= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu4 is installed vlc-nox: Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libc6 (= 2.15) but 2.15-0ubuntu10 is installed Depends: libfontconfig1 (= 2.8.0) but 2.8.0-3ubuntu9.1 is installed Depends: libfreetype6 (= 2.2.1) but 2.4.8-1ubuntu2 is installed Depends: libgcc1 (= 1:4.1.1) but 1:4.6.3-1ubuntu5 is installed Depends: libgnutls26 (= 2.12.6.1-0) but 2.12.14-5ubuntu3.1 is installed Depends: libmpcdec6 (= 1:0.1~r435) but 2:0.1~r459-1ubuntu1 is installed Depends: libncursesw5 (= 5.6+20070908) but 5.9-4 is installed Depends: libpostproc-extra-52 (= 4:0.8-1~) but it is not installed Depends: libsmbclient (= 3.0.24) but 2:3.6.3-2ubuntu2.3 is installed Depends: libstdc++6 (= 4.6) but 4.6.3-1ubuntu5 is installed Depends: libswscale-extra-2 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libudev0 (= 147) but 175-0ubuntu9.1 is installed Depends: libxml2 (= 2.7.4) but 2.7.8.dfsg-5.1ubuntu4.1 is installed Depends: zlib1g (= 1:1.2.0.2) but 1:1.2.3.4.dfsg-3ubuntu4 is installed xbmc-bin: Depends: libavcodec-extra-53 (= 4:0.8-1~) but it is not installed Depends: libavfilter-extra-2 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavformat-extra-53 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libavutil-extra-51 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed Depends: libpostproc-extra-52 (= 4:0.8-1~) but it is not installed Depends: libswscale-extra-2 (= 4:0.8-1~) but 6:0.10.4.0ubuntu0jon2.2 is installed

    And apt-get is reporting: root@LAPTOP:~# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... failed. The following packages have unmet dependencies: audacious-plugins : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed libavfilter-extra-2 : Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) but it is not installed Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) but it is not installed libavformat-extra-53 : Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) but it is not installed Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) but it is not installed libk3b6-extracodecs : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed libquicktime2 : Depends: libavcodec53 (= 4:0.8~beta2-2) but it is not installed or libavcodec-extra-53 (= 4:0.8~beta2-2) but it is not installed libxine1-ffmpeg : Depends: libavcodec53 (= 4:0.7.3-1) but it is not installed or libavcodec-extra-53 (= 4:0.7.3-1) but it is not installed mencoder : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed mplayer : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed vlc : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed vlc-nox : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed xbmc-bin : Depends: libavcodec53 (= 4:0.8-1~) but it is not installed or libavcodec-extra-53 (= 4:0.8-1~) but it is not installed E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. E: Unable to correct dependencies

    How to proceed?? Thanks in advance, Michiel

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc., which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an Interface, but also no effect. I'm beginning to think that the reason is because I am encapsulating my model within a ViewModel. My Controller (Create Action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the Validation annotations not being valid (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1)
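    One thing worth ruling out before blaming the ViewModel nesting: the buddy-class properties above are declared without an access modifier, which makes them private, and the metadata provider only matches buddy-class members that are public. A sketch of the metadata class with public properties (assuming visibility is indeed the culprit here):

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        public class ConsultantRegistrationMetadata
        {
            // Public so the associated-metadata provider can match these
            // members by name against the generated class's properties.
            [DisplayName("Title")]
            [Required(ErrorMessage = "Title is required")]
            [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")]
            public string Title { get; set; }

            [DisplayName("Forename(s)")]
            [Required(ErrorMessage = "Forename(s) is required")]
            [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")]
            public string Forenames { get; set; }
        }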

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although encouraging the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries. I have this working fairly well, already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them. I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help are with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this: // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes delegate TResult FakeFunc<T, TResult>(T arg); delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2); abstract class Projection{ public static Condition operator==(Projection a, Projection b){ return new EqualsCondition(a, b); } public static Condition operator!=(Projection a, Projection b){ throw new NotImplementedException(); } } class ColumnProjection : Projection{ readonly Table table; readonly string columnName; public ColumnProjection(Table table, string columnName){ this.table = table; this.columnName = columnName; } } abstract class Condition{} class EqualsCondition : Condition{ readonly Projection a; readonly Projection b; public EqualsCondition(Projection a, Projection b){ this.a = a; this.b = b; } } class TableView{ readonly Table table; readonly Projection[] projections; public TableView(Table table, Projection[] projections){ this.table = table; this.projections = projections; } } class Table{ public Projection this[string columnName]{ get{return new ColumnProjection(this, columnName);} } public TableView Select(params Projection[] projections){ return new TableView(this, projections); } public TableView Select(FakeFunc<Table, Projection[]> projections){ return new TableView(this, projections(this)); } public Table Join(Table other, Condition condition){ return new JoinedTable(this, other, condition); } public TableView Join(Table inner, FakeFunc<Table, Projection> outerKeySelector, FakeFunc<Table, Projection> innerKeySelector, FakeFunc<Table, Table, Projection[]> resultSelector){ Table join = new JoinedTable(this, inner, new EqualsCondition(outerKeySelector(this), innerKeySelector(inner))); return join.Select(resultSelector(this, inner)); } } class JoinedTable : Table{ readonly Table left; readonly Table right; readonly Condition condition; public JoinedTable(Table left, Table right, Condition condition){ this.left = left; this.right = right; this.condition = condition; } } This allows me to use a fairly decent syntax in C# 2: Table table1 = new Table(); Table table2 = new Table(); TableView result = table1 .Join(table2, table1["ID"] == table2["ID"]) .Select(table1["ID"], table2["Description"]); But an even nicer syntax in C# 3: TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] select new[]{t1["ID"], t2["Description"]}; This works well and gives me identical results to the first case. The problem is if I want to join in a third table. 
TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] join t3 in table3 on t1["ID"] equals t3["ID"] select new[]{t1["ID"], t2["Description"], t3["Foo"]}; Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?
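    For what it's worth, the error falls out of the C# compiler's standard query translation: with two join clauses, the first Join must return a transparent identifier (an anonymous type) that the second Join then consumes. Roughly, the query above expands to the following sketch, which cannot compile against a Join whose result selector is fixed to Projection[]:

        // What the compiler attempts for the three-table query (won't compile here).
        // The first Join's result selector must produce new { t1, t2 } -- an
        // anonymous type -- so the second Join can consume it, but the duck-typed
        // Join above demands a Projection[] result, hence the conversion error.
        TableView result = table1
            .Join(table2,
                  t1 => t1["ID"],
                  t2 => t2["ID"],
                  (t1, t2) => new { t1, t2 })          // transparent identifier
            .Join(table3,
                  x => x.t1["ID"],
                  t3 => t3["ID"],
                  (x, t3) => new[] { x.t1["ID"], x.t2["Description"], t3["Foo"] });

    One way out, as in LINQ to Objects, is to make the result selector generic (e.g. a Join<TResult> taking a FakeFunc<Table, Table, TResult>) and to expose a further Join over that result - for instance as a generic extension method in a C# 3 companion assembly - so the anonymous pair itself supports the next join.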

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs: “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.” This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”?

    Globalization Is Hard

    Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC, since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn’t conflict with our “placeholder” time of midnight.

    Detecting Invalid Times

    In .NET you might use code similar to the following for converting a local time to UTC: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); if (localTimeZone.IsInvalidTime(localDate)) localDate = localDate.AddHours(1); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, we record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again. In this scenario, how should we interpret February 18, 2012 11:30 PM?

    Enter Noda Time

    I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category. Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code; the NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous): var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0); const string timeZoneId = "Brazil/East"; var timezone = DateTimeZone.ForId(timeZoneId); var localDateTimeMapping = timezone.MapLocalDateTime(localDateTime); ZonedDateTime unambiguousLocalDateTime; switch (localDateTimeMapping.Type) { case ZoneLocalMapping.ResultType.Unambiguous: unambiguousLocalDateTime = localDateTimeMapping.UnambiguousMapping; break; case ZoneLocalMapping.ResultType.Ambiguous: unambiguousLocalDateTime = localDateTimeMapping.EarlierMapping; break; case ZoneLocalMapping.ResultType.Skipped: unambiguousLocalDateTime = new ZonedDateTime( localDateTimeMapping.ZoneIntervalAfterTransition.Start, timezone); break; default: throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMapping.Type)); } var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc();

    Let’s break this sample down:
    I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “invalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity.
    The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future.
    I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.
    The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to ensure to callers that each instance represents an unambiguous point in time.
    The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method. There are three possible outcomes:
    If the provided local date time is unambiguous in the provided time zone, I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘UnambiguousMapping’ property of the mapping returned by the ‘MapLocalDateTime’ method.
    If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time.
    If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error depending on the needs of the application.
    The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDateTime’ variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An ‘Instant’ represents a number of ticks since 1970-01-01 at midnight (Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method.

    This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being, so I’m not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.

    Read the article

  • CodePlex Daily Summary for Friday, March 19, 2010

    New Projects
    [Tool] Vczh Visual Studio UnitTest Coverage Analyzer: Analyzing Visual Studio Unittest Coverage Exported XML file
    crudwork is a library of reuseable classes for developing .NET applications: crudwork is a collection of reuseable .NET classes and features. If you searched for StpLibrary and landed here, you're in the right place. Origi...
    CWU Animated AVL Tree Tutorial: This is a silverlight demo of a self-balancing AVL tree. On the original team were CWU undergraduates Eric Brown, Barend Venter, Nick Rushton, Arry...
    DotNetNuke® Skin Modern: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by Salar Golestanian of SalarO. The skin utilizes both the telerik...
    DotNetNuke® Skin Monster: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Jon Edwards of SlumtownHero.co.za. This package uses totally tab...
    DotNetNuke® Skin Synapse: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Exionyte Solutions. This package features 2 colors with 4...
    earthworm: Earthworm is a pet project intended as a repository of data access logic, including some ORM, state management and bridging the gap between connect...
    ema: EMA is a place for collaborative effort to implement a PowerGrid game engine. For more info on PowerGrid the board game see: http://www.boardgamege...
    Extended SharePoint Web Parts: Extending capabilities of existing SharePoint 2007 Web Parts by inheriting and alter
    FreedomCraft: Craft development site
    G.B SecondLife Sculpter: This is a Sculptor for "secondlife"
    InfoPath Error Viewer: InfoPath Error Viewer provides an intuitive list to show all errors in the entire InfoPath form. You'll no longer have to find the validation error...
    LEET (LEET Enhances Exploratory Testing): LEET is a capture-replay tool based on Microsoft's User Interface Automation Framework. It is targeted at agile teams, and provides support for us...
    Linq To Entity: Linq, Linq to Entity, Entity
    MACFBTest: This is a test for a Facebook application.
    MetaProperties: MetaProperties helps you to create event driven architectures in .NET. It saves you time and it helps you avoid mistakes. It's compatible with WPF ...
    ownztec web: project of ownztec.com
    Parallel Programming Guide: Content for the latest patterns & practices book on design patterns for parallel programming. Downloadable book outline and draft chapters as well ...
    Perseus - Sistema de Matrícula On-Line: Enrollment system developed by the 5th term of the Web Development program at FACECLA.
    Project Tru Tiên: Project EL tru tiên, Zhuxian
    ProSysPlus.Net Framework: How do I get the ease and efficiency of my work in VFP (R.I.P. 2010)? The answer is here: the ProSysPlus.Net Framework. Why is it open source? Wh...
    Quick Anime Renamer: Originally included with AniPlayer X, Quick Anime Renamer easily renames your anime files into a "cleaner" format so you won't get retinal detachment.
    Simple XNA Button: This is a project of a helper for instancing Simple Buttons in XNA with a ButtonPanel. It's got various features, like: Load a Panel from a Plain Tex...
    SteelVersion - Monitor your .NET Application versioning: SteelVersion helps you to find and store versioning information about .NET assemblies ("Explorer" mode). It also makes it easier to continuously ch...
    Stellar Results: Astronomical Tracking System for IUPUI CSCI506 - Fall 2007, Team2
    TheHunterGetsTheDeer: first AI
    wandal: wandal
    Web App Data Architect's CodeCAN: Contains different types of code samples to explore different types of technical solutions/patterns from an architect's point of view.
    Yet Another GPS: Yet another GPS tracker is a very powerful GPS track application for Windows Mobile

    New Releases
    ASP.Net Client Dependency Framework: v1.0 RC1: ASP.Net Client Dependency has progressed to release candidate 1. With the community feedback and bug reports we've been able to make some great upd...
    C# FTP Library: FTPLib v1.0.1.1: This release has a couple of small bug fixes as well as the new abilities to specify a port to connect to and to create a new directory with the Cr...
    crudwork is a library of reuseable classes for developing .NET applications: crudwork 2.2.0.1: crudwork 2.2.0.1 (initial version)
    DotNetNuke® Skin Modern: Modern Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by Salar Golestanian of SalarO. The skin utilizes both the telerik...
    DotNetNuke® Skinning Extensions: Nav Menu Demo Skins: This very basic skin demonstrates: 1. How to force NAV menu to generate an unordered list menu 2. The creation of a sub menu, both horizontal and ...
    DotNetNuke® XML: 04.03.05: XML/XSL Module 04.03.05 Release Candidate. This is a maintenance release. Fully Qualified Namespace avoids conflicts with Namespaces used by Teler...
    eCommerce by Onex Community Edition: Installer of eCommerce by Onex Community 1.0: Installer of eCommerce by Onex Community 1.0. Last changes: Added integration with Paypal; Corrected adding of photos and attachments to products ...
    eCommerce by Onex Community Edition: Source code of eCommerce by Onex Community 1.0: Changes in version 1.0: Added integration with Paypal; Corrected adding of photos and attachments to products; Fixed problem with cancellation of...
    Employee Info Starter Kit: v2.2.0 (Visual Studio 2005-2008): This is a starter kit, which includes very simple user requirements, where we can create, read, update and delete (CRUD) the employee info of a com...
    Employee Info Starter Kit: v4.0.0.alpha (Visual Studio 2010): Employee Info Starter Kit is an ASP.NET based web application, which includes very simple user requirements, where we can create, read, update and d...
    Encrypted Notes: Encrypted Notes 1.4: This is the latest version of Encrypted Notes (1.4). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...
    Extended SharePoint Web Parts: ContentQueryAdvanced: This .wsp file contains a single web part ContentQueryAdvanced. This web part inherits from ContentQuery web part and adds a ToolPart field for a ...
    Extended SharePoint Web Parts: Source Code: Zip file includes all the source code used to extend Content Query Web Part, adding a Tool Part field to insert a CAML query/filter/sort
    Facebook Developer Toolkit: Version 3.02: Updated copyright. No new functionality. Version 3.1 in the works.
    fleXdoc: template-based server-side document generator (docx): fleXdoc 1.0 (final): fleXdoc consists of a webservice and a (test)client for the service. Make sure you also download the testclient: you can use it to test the install...
    InfoPath Error Viewer: InfoPath Error Viewer 1.0: This is an initial version of this tool. You can: 1. View all errors in a list. 2. Locate to a binding control of an error field. 3. See the detai...
    LEET (LEET Enhances Exploratory Testing): LEET Alpha: The first public release of LEET includes the ability to record tests from running GUIs, assist in writing tests manually from a running GUI, edit ...
    Linq To Entity: Linq to Entity: The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer add...
    MDownloader: MDownloader-0.15.8.56699: Fixed performance and memory usage. Fixed Letitbit provider. Added detecting IMDB, NFO, TV.com... links in RSS Monitor. Supported password len...
    MetaProperties: MetaProperties 1.0.0.0: This is a multi-targeted release of MetaProperties for the desktop and Silverlight versions of the .NET framework. The desktop version is fully ...
    Nito.KitchenSink: Version 2: Added a cancelable Stream.CopyTo. Depends on Nito.Linq 0.2. Please report any issues via the Issue Tracker.
    Project Server 2007 Timesheet AutoStatus Plus: AutoStatusPlus 1.0.1.0: AutoStatusPlus 1.0.1.0. Supported Systems: x86 and x64 Project Server 2007 deployments with or without MOSS 2007. Recommended Patchlevels: WSS 3.0: ...
    Project Tru Tiên: Elements-test V1: Description: the elements.data file - has the full IDs from the Vietnamese Tru Tien 2 (V37) Elemens.data - has the full IDs from the Tru Tien offline server's Elements.data (cur...
    Quick Anime Renamer: Quick Anime Renamer v0.1: AniPlayer X v1.4.5 - started 3/18/2010. Initial Release!
    QuickieB2B: Quickie v1.0b: QuickieB2B - made for DEV4FUN competition organized by Microsoft Croatia
    Silverlight 3.0 Advanced ToolTipService: Advanced ToolTipService v2.0.2: This release is compiled against the Silverlight 3.0 runtime. A demonstration on how to set the ToolTip content to a property of the DataContext o...
    Simple XNA Button: XNA Button 1.0: The Main Project. This uses XNA 3.0 but it can be built with lower versions of XNA Framework. This was made using Visual Studio 2008.
    StoryQ: StoryQ 2.0.3 Library and Converter UI: New features in this release: Tagging and a tag-capable rich html report. The code generator is capable of generating entire classes. This relea...
    The Silverlight Hyper Video Player [http://slhvp.com]: Version 1.0: Version 1.0
    VCC: Latest build, v2.1.30318.0: Automatic drop of latest build
    Word Index extracts words or sentences from Word document according to patterns: Word Index 1.0.1.0 (For Word 2007 and Word 2003): Word Index for Word 2007 & 2003: WordIndex.msi (Win-Installer Setup for Word Index) Source code: wordindex.codeplex.com V1.0.1.0.zip: (Source co...
    Yet Another GPS: YAGPS-Alfa.1: Yet another GPS tracker is a very powerful GPS track application for Windows Mobile

    Most Popular Projects
    MetaSharp, Rawr, WBFS Manager, Silverlight Toolkit, ASP.NET Ajax Library, Microsoft SQL Server Product Samples: Database, AJAX Control Toolkit, LiveUpload to Facebook, Windows Presentation Foundation (WPF), ASP.NET

    Most Active Projects
    LINQ to Twitter, Rawr, OData SDK for PHP, jQuery Library for SharePoint Web Services, DirectQ, Open Data App Framework (ODAF), patterns & practices – Enterprise Library, BlogEngine.NET, PHPExcel, NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article
