Search Results

Search found 4216 results on 169 pages for 'dr dot'.


  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide a supported platform's resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system, so Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates the industry-leading CMT capability of the UltraSPARC T1, T2, and T2 Plus processors with the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following:

    - Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost, enabling you to meet the most aggressive price/performance requirements.
    - Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover, as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it is fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing.
    - CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you better align resources with your IT and business priorities.
    - Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Active domain migration performance is improved by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.
    - Dynamic Crypto Control - Dynamically add and remove cryptographic units (also known as MAUs) to and from active domains, and migrate active domains that have cryptographic units.
    - Physical-to-Virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9, or 10 OS into a virtualized Oracle Solaris 10 image, and use this image to facilitate OS migration into the virtualized environment.
    - Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.
    - CPU Power Management - Save power by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.
    - Advanced Network Configuration - Configure jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O for more flexible network configurations, higher performance, and scalability.
    - Official Certification Based on Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).
    - Affordable, Full-Stack, Enterprise-Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

    SPARC Server Virtualization

    Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform with the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC adds another important capability: OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity of computing resources. For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor, and the scheduler is built into the CPU for lower overhead and higher performance. Organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

    Management with Oracle Enterprise Manager Ops Center

    The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers, helping you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

    Ease of Deployment with the Configuration Assistant

    The Oracle VM Server for SPARC Configuration Assistant helps you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. It is available as both a graphical user interface (GUI) and a terminal-based tool.

    Oracle Solaris Cluster HA Support

    The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently, as if they were running on a classical Solaris Cluster hardware node.

    Supported Systems

    Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise systems with CMT technology.
    UltraSPARC T2 Plus systems:
    - Sun SPARC Enterprise T5140 Server
    - Sun SPARC Enterprise T5240 Server
    - Sun SPARC Enterprise T5440 Server
    - Sun Netra T5440 Server
    - Sun Blade T6340 Server Module
    - Sun Netra T6340 Server Module

    UltraSPARC T2 systems:
    - Sun SPARC Enterprise T5120 Server
    - Sun SPARC Enterprise T5220 Server
    - Sun Netra T5220 Server
    - Sun Blade T6320 Server Module
    - Sun Netra CP3260 ATCA Blade Server

    Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.
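    To make the domain-creation workflow concrete, here is a minimal sketch of creating and starting a logical domain with the Logical Domains Manager ldm(1M) CLI. It assumes a control domain whose virtual switch (primary-vsw0) and virtual disk service (primary-vds0) are already configured; the domain name, resource sizes, and disk backend are illustrative, not prescriptive:

        # create a guest domain and assign CPU threads and memory
        ldm add-domain ldom1
        ldm add-vcpu 8 ldom1
        ldm add-memory 4G ldom1
        # give the domain a virtual NIC on the control domain's virtual switch
        ldm add-vnet vnet1 primary-vsw0 ldom1
        # export a disk backend through the virtual disk service and attach it
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
        ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1
        # bind resources and boot the domain
        ldm bind-domain ldom1
        ldm start-domain ldom1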

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono -- as easy as breathing:

        var context = new ApiContext(apiKey).Initialize(false);

        Question thisPost = context.Official
            .StackApps
            .Questions.ById(386)
            .WithComments(true)
            .First();

        Console.WriteLine(thisPost.Title);

        thisPost
            .Owner
            .Questions
            .PageSize(5)
            .Sort(PostSort.Votes)
            .ToList()
            .ForEach(q =>
            {
                Console.WriteLine("\t" + q.Score + "\t" + q.Title);
                q.Timeline.ToList().ForEach(t =>
                    Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName));
                Console.WriteLine();
            });

        // if you can think it, you can get it.

    Output (each question's score and title, followed by its timeline of revisions, votes, comments, and answers with their owners):

        Soapi.CS : A fully relational fluent .NET Stack Exchange API client library
        21  Soapi.CS : A fully relational fluent .NET Stack Exchange API client library
            Revision code poet; Revision code poet; Votes code poet; Votes code poet; Revision code poet;
            Revision code poet; Revision code poet; Votes code poet; Votes code poet; Votes code poet;
            Revision code poet; Revision code poet; Revision code poet; Revision code poet; Revision code poet;
            Revision code poet; Revision code poet; Revision code poet; Revision code poet; Revision code poet;
            Votes code poet; Comment code poet; Revision code poet; Votes code poet; Revision code poet;
            Revision code poet; Revision code poet; Answer code poet; Revision code poet; Revision code poet
        14  SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way.
            Votes code poet; Revision code poet; Votes code poet; Comment code poet; Comment code poet;
            Comment code poet; Votes lfoust; Votes code poet; Comment code poet; Comment code poet;
            Comment code poet; Comment code poet; Revision code poet; Comment lfoust; Votes code poet;
            Revision code poet; Votes code poet; Votes lfoust; Votes code poet; Revision code poet;
            Comment Dave DeLong; Revision code poet; Revision code poet; Votes code poet; Comment lfoust;
            Comment Dave DeLong; Comment lfoust; Comment lfoust; Comment Dave DeLong; Revision code poet
        11  SOAPI-EXPLORE: Self-updating single page JavaSript API test harness
            Votes code poet; Votes code poet; Votes code poet; Votes code poet; Votes code poet;
            Comment code poet; Revision code poet; Votes code poet; Revision code poet; Revision code poet;
            Revision code poet; Comment code poet; Revision code poet; Votes code poet; Comment code poet;
            Question code poet; Votes code poet
        11  Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API
            Comment George Edison; Comment George Edison; Comment George Edison; Comment George Edison;
            Comment George Edison; Comment George Edison; Answer George Edison; Votes code poet;
            Votes code poet; Votes code poet; Votes code poet; Revision code poet; Revision code poet;
            Answer code poet; Comment code poet; Revision code poet; Comment code poet; Comment code poet;
            Comment code poet; Revision code poet; Revision code poet; Votes code poet; Votes code poet;
            Votes code poet; Votes code poet; Comment code poet; Comment code poet; Comment code poet;
            Comment code poet; Comment code poet
        9   SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API
            Votes code poet; Revision code poet; Comment Dennis Williamson; Answer Dennis Williamson;
            Votes code poet; Votes Dennis Williamson; Comment code poet; Question code poet; Votes code poet

    About

    A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API client library. Out of the box, Soapi provides you with a robust client library that abstracts away almost all of the messy details of consuming the API and lets you concentrate on implementing your ideas.
    A few features include:

    - A fully relational model of the API data set, exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you, e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post":

        my.Questions.First().Comments.First().Owner.Comments.ToList();

      (Yes, this is a real expression that returns the data as expressed!)
    - Full coverage of the API, all routes and all parameters, with an intuitive syntax.
    - Strongly typed domain data objects for all API data structures.
    - Eager and lazy loading of 'stub' objects; eager/lazy loading may be disabled. When finer-grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages.
    - A rich asynchronous implementation.
    - A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal; you may set a distinct cache duration for any particular route.
    - A configurable request throttle to ensure compliance with the API terms of usage and to simplify your code, in that you do not have to worry about and respond to 50X errors. The RequestCache and throttled queue are thread-safe, so you can make as many requests as you like, from as many threads as you like, as fast as you like, and not worry about abusing the API or having to write reams of management/compensation code.
    - A configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed, so most any HTTP error will be the result of a timeout or other network infrastructure issue; a retry buffer provides a level of fault tolerance that you can rely on.
    - An almost identical JavaScript library, Soapi.JS, and its full-figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser.

    License

    Licensed under the GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably)

    Platforms

    .NET 3.5 / .NET 4.0 / Silverlight 3 / Silverlight 4 / Windows Phone 7 / Mono

    Download

    Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again; get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads. The source is C# 3.5 and includes projects and solutions for the following IDEs: Visual Studio 2008, Visual Studio 2010, MonoDevelop 2.4.

    Documentation

    Full documentation is available at http://soapi.info/help/cs/index.aspx

    Sample Code / Usage Examples

    Sample code and usage examples will be added as answers to this question.

    - Full API Coverage - all API routes are covered.
    - Full Parameter Parity - if the API exposes it, Soapi giftwraps it for you.
    - Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse.
    - Fluent Configuration - setting up a Soapi.ApiContext could not be easier.
    - Bulk Data Import - a tiny app that quickly loads a SQLite data file with all users in the stackiverse.
    - Paged Results - Soapi.CS transparently handles multi-page operations.
    - Asynchronous Requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing API apps in Silverlight or Windows Phone 7.
    - Caching and Throttling - how and why.

    Apps that use Soapi.CS

    - Soapi.FindUser - .NET utility for locating a user anywhere in the stackiverse
    - Soapi.Explore - the entire API at your command
    - Soapi.LastSeen - list users by last access time
    - Add your app/site here - I know you are out there ;-) If you are not comfortable editing this post, simply add a comment and I will add it.

    The CS/SL/WP7/Mono libraries all compile the same code, and with the exception of environmental considerations for Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information is available on the SOAPI eco-system.

    Contact

    This library is currently the effort of me, Sky Sanders (code poet), and I can be reached at gmail - sky.sanders. Anyone interested in improving this library is welcome.

    Support Soapi

    You can help support this project by voting for Soapi's Open Source Ad post. For more information about the origins of Soapi.CS and the rest of the Soapi eco-system, see "What is Soapi and why should I care?"

    Read the article

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces are idle, with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must go into the server and make it do an outbound IPv6 connection (normally a ping) to start it back up. What's weird is that if I run iptraf when it's not working, it still shows an inbound IPv6 packet... the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about 1 mile away on the same ISP, it is never able to connect. It always times out, but again iptraf shows an inbound IPv6 packet; the server just does not reply. To test whether my server is accessible through IPv6 I always have to use my VZW 4G phone (they use IPv6) or ipv6proxy dot net.

    Here is all of the configuration information my ISP gives for their tunnel server:

        6rd Prefix = 2602:100::/32
        Border Relay Address = 68.114.165.1
        6rd prefix length = 32
        IPv4 mask length = 0

    Here is my /etc/network/interfaces for IPv6 (x's used to mask real addresses):

        auto charterv6
        iface charterv6 inet6 v4tunnel
            address 2602:100:189f:xxxx::1
            netmask 32
            ttl 64
            gateway ::68.114.165.1
            endpoint 68.114.165.1
            local 24.159.218.xxx
            up ip link set mtu 1280 dev charterv6

    Here is my iptables config:

        *filter
        :INPUT DROP [0:0]
        :fail2ban-ssh - [0:0]
        :OUTPUT ACCEPT [0:0]
        :FORWARD DROP [0:0]
        :hold - [0:0]
        -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A fail2ban-ssh -j RETURN
        -A INPUT -p icmp -j ACCEPT
        COMMIT

    And last, here is my ip6tables firewall config:

        *filter
        :INPUT DROP [1653:339023]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [60141:13757903]
        :hold - [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A INPUT -p ipv6-icmp -j ACCEPT
        COMMIT

    So, in summary:

    1. iptraf always shows IPv6 traffic, so it is always making it to the server.
    2. The server stops replying on IPv6 after no traffic for a while (about 10 minutes) until an outbound connection is made; then the process repeats.
    3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request).

    Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows:

        IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0
        ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1

    It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN.) This happens even with the IPv6 address being set in the hosts file to the server's domain name. With iptables set up normally with the above configurations, it only says this:

        IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0

    I'm REALLY stuck on this, and any help would be GREATLY appreciated.
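    For readers unfamiliar with how the tunnel address above is derived: with a /32 6rd prefix and an IPv4 mask length of 0, all 32 bits of the WAN IPv4 address are appended to the ISP's 2602:100::/32 prefix to form the delegated /64. A quick shell sketch (the final host octet 57 is made up for illustration, since the real one is masked above):

        # embed the dotted-quad WAN address into the 6rd prefix, two octets per hextet
        printf '2602:100:%02x%02x:%02x%02x::1\n' 24 159 218 57
        # -> 2602:100:189f:da39::1

    which matches the 2602:100:189f:xxxx::1 address configured in /etc/network/interfaces.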

    Read the article

  • MVC 3 AdditionalMetadata Attribute with ViewBag to Render Dynamic UI

    - by Steve Michelotti
    A few months ago I blogged about using model metadata to render a dynamic UI in MVC 2. The scenario in the post was that we might have a view model where the questions are conditionally displayed and therefore a dynamic UI is needed. To recap the previous post, the solution was to use a custom attribute called [QuestionId] in conjunction with an "ApplicableQuestions" collection to identify whether each question should be displayed. This allowed me to have a view model that looked like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [QuestionId("NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [QuestionId("NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [QuestionId("NB0023")]
        public int Age { get; set; }

        public IEnumerable<string> ApplicableQuestions { get; set; }

    At the same time, I was able to avoid repetitive IF statements for every single question in my view:

        <%: Html.EditorFor(m => m.FirstName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.LastName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.Age, new { applicableQuestions = Model.ApplicableQuestions })%>

    by creating an editor template called "ScalarQuestion" that encapsulated the IF statement:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%@ Import Namespace="DynamicQuestions.Models" %>
        <%@ Import Namespace="System.Linq" %>
        <%
            var applicableQuestions = this.ViewData["applicableQuestions"] as IEnumerable<string>;
            var questionAttr = this.ViewData.ModelMetadata.ContainerType.GetProperty(this.ViewData.ModelMetadata.PropertyName).GetCustomAttributes(typeof(QuestionIdAttribute), true) as QuestionIdAttribute[];
            string questionId = null;
            if (questionAttr.Length > 0)
            {
                questionId = questionAttr[0].Id;
            }
            if (questionId != null && applicableQuestions.Contains(questionId)) { %>
        <div>
            <%: Html.Label("") %>
            <%: Html.TextBox("", this.Model)%>
        </div>
        <% } %>

    You might want to go back and read the full post in order to get the full context. MVC 3 offers a couple of new features that make this scenario more elegant to implement. The first step is to use the new [AdditionalMetadata] attribute which, so far, appears to be an under-appreciated new feature of MVC 3. With this attribute, I don't need my custom [QuestionId] attribute anymore - now I can just write my view model like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [AdditionalMetadata("QuestionId", "NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [AdditionalMetadata("QuestionId", "NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [AdditionalMetadata("QuestionId", "NB0023")]
        public int Age { get; set; }

    Thus far, the documentation seems to be pretty sparse on the AdditionalMetadata attribute. It's buried in the Other New Features section of the MVC 3 home page and, after showing the attribute on a view model property, it just says, "This metadata is made available to any display or editor template when a product view model is rendered. It is up to you to interpret the metadata information." But what exactly does it look like for me to "interpret the metadata information"? Well, it turns out it makes the view much easier to work with.
    Here is the re-implemented ScalarQuestion template, updated for MVC 3 and Razor:

        @{
            object questionId;
            ViewData.ModelMetadata.AdditionalValues.TryGetValue("QuestionId", out questionId);
            if (ViewBag.applicableQuestions.Contains((string)questionId)) {
                <div>
                    @Html.LabelFor(m => m)
                    @Html.TextBoxFor(m => m)
                </div>
            }
        }

    So we've gone from 17 lines of code (in the MVC 2 version) to about 7-8 lines of code here. The first thing to notice is that in MVC 3 we now have a property called "AdditionalValues" that hangs off of the ModelMetadata property. This is automatically populated by any [AdditionalMetadata] attributes on the property. There is no more need for me to explicitly write Reflection code to GetCustomAttributes() and then check to see if those attributes were present. I can just call TryGetValue() on the dictionary to see if they were present. Secondly, consider the "applicableQuestions" anonymous type that I passed in from the calling view - in MVC 3 I now have a dynamic ViewBag property where I can just "dot into" applicableQuestions with a nicer syntax than the dictionary square-bracket syntax. And there are no problems calling the Contains() method on this dynamic object, because at runtime the DLR has resolved that it is a generic List<string>.

    At this point you might be saying that, yes, the view got much nicer than the MVC 2 version, but my view model got slightly worse. In the previous version I had a nice [QuestionId] attribute, but now, with the [AdditionalMetadata] attribute, I have to type the string "QuestionId" for every single property and hope that I don't make a typo. Well, the good news is that it's easy to create your own attributes that can participate in the metadata's additional values. The key is that the attribute must implement the IMetadataAware interface and populate the AdditionalValues dictionary in the OnMetadataCreated() method:

        public class QuestionIdAttribute : Attribute, IMetadataAware
        {
            public string Id { get; set; }

            public QuestionIdAttribute(string id)
            {
                this.Id = id;
            }

            public void OnMetadataCreated(ModelMetadata metadata)
            {
                metadata.AdditionalValues["QuestionId"] = this.Id;
            }
        }

    This now allows me to encapsulate my "QuestionId" string in just one place and get back to my original attribute, which can be used like this: [QuestionId("NB0021")]. The [AdditionalMetadata] attribute is a powerful and under-appreciated new feature of MVC 3. Combined with the dynamic ViewBag property, you can do some really interesting things with your applications with less code and ceremony.
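    One piece the post leaves implicit is how applicableQuestions ends up on the ViewBag in the first place. A minimal, hypothetical controller action might look like the sketch below; the QuestionViewModel type, the action name, and the hard-coded question ids are illustrative assumptions, not part of the original post:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class QuestionsController : Controller
        {
            public ActionResult Index()
            {
                // hypothetical view model holding the question properties shown above
                var model = new QuestionViewModel();

                // the editor template reads this list via ViewBag.applicableQuestions;
                // NB0022 (Last Name) is deliberately omitted here, so it will not render
                ViewBag.applicableQuestions = new List<string> { "NB0021", "NB0023" };

                return View(model);
            }
        }

    With this setup, FirstName and Age render while LastName is suppressed by the template's Contains() check.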

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to it, utilizing the new MemoryMappedFile feature. I have a database of users, and I need to send a notice to all of them as a PDF document. (The mail-sending part is not covered in this article.) The PDF document will contain the company letterhead, to make it more official. The list of users is stored in a database table named "tblusers". For each user I need to send a customized message addressed to them personally. The database structure for the users is given below:

        id  Title   Full Name
        1   Mr.     Sreeju Nair K. G.
        2   Dr.     Alberto Mathews
        3   Prof.   Venketachalam

    Now I am going to generate the PDF document that contains a message to the user, in the following format:

        Dear <Title> <FullName>,
        The message for the user.
        Regards,
        Administrator

    I also have an image, bg.jpg, that contains the background for the generated document. I have created a .Net 4.0 empty web application project named "iTextSharpSample". The first thing I need to do is download the iTextSharp dll from SourceForge. You can find the URL for the download here: http://sourceforge.net/projects/itextsharp/files/ I extracted the zip file and added itextsharp.dll as a reference to my project. I also added a web form named default.aspx to my project. After doing all this, the solution explorer has the following view. In the default.aspx page, I inserted a GridView and associated it with a SQL data source control that binds data from tblusers. I added a button column to the grid view with the text "generate pdf". The output of the page in the browser is as follows. Now I am going to create a PDF document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying an onrowcommand event handler. My GridView source looks like this:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1" Width="481px" CellPadding="4"
            ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF">
        .....................................................
        .....................................................
        </asp:GridView>

    In the code behind, I wrote the corresponding event handler:

        protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
        {
            // The button click event handler code.
            // I am going to explain the code for this section in the remaining part of the article.
        }

    The Generate_PDF method is straightforward: it gets the title, full name and message into variables, then creates the PDF using them. The code for getting data from the grid view is as follows:

        // get the row index stored in the CommandArgument property
        int index = Convert.ToInt32(e.CommandArgument);
        // get the GridViewRow where the command is raised
        GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
        string title = selectedRow.Cells[1].Text;
        string fullname = selectedRow.Cells[2].Text;
        string msg = @"There are some changes in the company policy, due to this matter you
            need to submit your latest address to us. Please update your contact details /
            personnal details by visiting the member area of the website. ...................................";
"; since I don’t want to save the file in the disk, I am going the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using Memory-Mapped mapped file, you can created non-persisted memory mapped files, that are not associated with a file in a disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code using MemoryMappedFile object to create a test pdf document in memory and perform read/write operation on file. The CreateViewStream() object will give you a stream that can be used to read or write data to/from file. The code is very straight forward and I included comment so that you can understand the code. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); b = new byte[mmf.CreateViewStream().Length]; rdr.Read(b, 0, (int)mmf.CreateViewStream().Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created looks as follows. Summary: Creating pdf document using iTextSharp is easy. You will get lot of information while surfing the www. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Company Review: Google Products

    Google, Inc. offers an array of products and services to all of its end users. However, its search capabilities are the foundation of Google's current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Its product decisions have allowed user demands to be met while focusing on a free-based model. This allows users to access Google data free of charge, which indirectly gives Google a strong competitive advantage over its competitors, along with the accuracy of its search results. According to Google, Inc., it offers the following types of searching capabilities:

    - Alerts - Get email updates on the topics of your choice
    - Blog Search - Find blogs on your favorite topics
    - Books - Search the full text of books
    - Custom Search - Create a customized search experience for your community
    - Desktop - Search and personalize your computer
    - Dictionary - Search for definitions of words and phrases
    - Directory - Search the web, organized by topic or category
    - Earth - Explore the world from your computer
    - Finance - Business info, news and interactive charts
    - GOOG-411 - Find and connect for free with businesses from your phone
    - Images - Search for images on the web
    - Maps - View maps and directions
    - News - Search thousands of news stories
    - Patent Search - Search the full text of US patents
    - Product Search - Search for stuff to buy
    - Scholar - Search scholarly papers
    - Toolbar - Add a search box to your browser
    - Trends - Explore past and present search trends
    - Videos - Search for videos on the web
    - Web Search - Search billions of web pages
    - Web Search Features - Find movies, music, stocks, books and more

    Google's free-based business model is only one way it differentiates itself from the competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined, according to the U.S. Department of Transportation Federal Highway Administration. Because QFD is so customer-driven, Google is in a constant state of change as it attempts to re-engineer its search algorithms and other dependent systems so that end-user requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest-lifecycle-cost options in design, materials and processes that achieve the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google's search applications include the modular design of a product and/or service and providing accurate value analysis.
    A design approach that adheres to four fundamental tenets of cohesiveness, encapsulation, self-containment, and high binding, to design a system component as an independently operable unit subject to change, is how the Open System Joint Task Force defines modular design. More specifically, M. S. Schmaltz describes modular software design as follows: rather than having a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost; it involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association.

    "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010)

    Google, Inc. encourages an open environment between all employees, also known as Googlers. This is reinforced by cross-functional teams, composed of members from multiple departments, assigned to every project, so that every department - such as marketing, finance, and quality assurance - has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc.

    Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end users quickly based on specific parameters and search settings. In addition, Google stores the data that is returned in case others desire the same results based on the same customized settings. This allows Google to appear to render search results in virtually real time while allowing for complete customization of the search criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google plays a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of Google's proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end users.
    References:

    - http://www.google.com/intl/en/options/
    - http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    - http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    - http://www.acq.osd.mil/osjtf/termsdef.html
    - http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    - http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    - http://mitsloan.mit.edu/omg/om-definition.php
    - http://www.humtech.com/opm/grtl/ols/ols3.cfm
    - http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here.

    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add features that are not currently available in SQL Azure, or features that need to be customized for flexibility. In this way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. The Azure worker role can perform background processes, and for handling processes such as synchronization and backup, it is our ideal tool.

    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

    - Scaling out access over multiple databases enables us to handle workload efficiently.
    - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data.

    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

    - To back up a SQL Azure database on local infrastructure.
    - Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud.
    - The ability to extend data to datacenters located across the world, to enable efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf)

    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata that enables synchronization. This is a one-time step, and the code for it goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop; the code logic to synchronize changes between SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
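    Since the linked PDF may not be at hand, here is a minimal sketch of the shape such a worker role skeleton typically takes, based only on the description above - the Setup()/Sync() bodies are deliberately left empty, and the one-minute polling interval is an illustrative assumption, not from the original post:

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // one-time provisioning: the Sync Framework creates tracking tables,
                // stored procedures and triggers on the participating databases
                Setup();

                while (true)
                {
                    // propagate changes between the SQL Azure databases
                    Sync();
                    Thread.Sleep(60000);
                }
            }

            private void Setup() { /* provisioning code goes here */ }
            private void Sync() { /* synchronization code goes here */ }
        }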
    Therefore, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will need to reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.

    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP. It allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you wish to have such functionality now, you have the option of developing custom code using the Sync Framework.

    Now, this code can easily be extended to synchronize on some schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code - in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices you have are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI-driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that a SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

    Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload them locally, consider the data transfer cost. If you upload them to blobs residing in the same datacenter, no transfer cost applies, but the cost of the blob storage does. So, before choosing an option, you need to evaluate your preferences, keeping the cost associated with each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
    So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions or feedback, you can comment below or mail me at Paras[at]student-partners[dot]com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary

    H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video that I'm trying to extract (900x lower).

    Body

    I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc.; I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s
        ...

    Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15-minute clip. So I did some black-box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15:51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800. There must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix

    The preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    EDIT

    Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).

    Read the article

  • How to load the environment variables at boot time before X11 on Ubuntu Precise?

    - by Fnux
    Using Ubuntu Precise 64-bit, I'm facing a problem that I'm unable to solve and that I'll try to describe below. I'm using a console-mode program (let's say abc) that uses Go, NodeJS, Java and Scala. In order for abc to work with these languages, I've declared the following statements:

    a) within /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar

    b) within /etc/login.defs:

        ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin
        ENV_PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin

    c) within /etc/sudoers:

        # env_reset
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"

    Then, when I start abc from a terminal, all is fine and I can use any of the 4 languages described above. However, if I put a script within /etc/init.d that starts abc during the boot process (i.e. before the GUI starts), using Java from abc is still fine, but using Go, NodeJS or Scala doesn't work anymore. So I guess that during the boot process, the script within /etc/init.d that starts abc is executed before the different environment variables set within /etc/sudoers, /etc/environment and /etc/login.defs are loaded. So, my question is: how do I force the environment variables to be loaded before my script starting abc is launched? Any help and advice on this topic would be truly appreciated. TIA. Cheers.

    Thanks again to Mark and Danila. Below is the current "abc" script file that I put within /etc/init.d:

        #! /bin/sh
        ### EDIT: ADD THIS VARS DEFINITIONS:
        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar
        "ENV_SUPATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        "ENV_PATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        "Defaults secure_path"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        ##### EXPORT this VARS so they are accessible to children:
        export "PATH" "CLASSPATH" "ENV_SUPATH PATH" "ENV_PATH PATH" "Defaults secure_path"

        ### BEGIN INIT INFO
        # Provides:          abc
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: abc initscript
        # Description:       This initscript starts and stops abc
        ### END INIT INFO

        # Author: Fnux, fnux.fl at gmail dot com
        # Version: 1.2
        # Note: (edit ABC_PATH if abc isn't installed in /opt/abc)

        NAME=abc
        ABC_PATH=/opt/abc
        START="-d"
        STOP="-k"
        VERSION="-v"
        SCRIPTNAME=/etc/init.d/$NAME
        STARTMESG="\nStarting abc in deamon mode."
        UPMESG="\n$NAME is running."
        DOWNMESG="\n$NAME is not running."
STATUS=`pidof $NAME` `# Exit if abc is not installed [ -x "$ABC_PATH/$NAME" ] || exit 0 case "$1" in start) echo $STARTMESG cd $ABC_PATH ./$NAME $START ;; stop) cd $ABC_PATH ./$NAME $STOP ;; status) if [ "$STATUS" > 0 ] ; then echo $UPMESG else echo $DOWNMESG fi ;; restart) cd $ABC_PATH ./$NAME $STOP echo $STARTMESG ./$NAME $START ;; version) cd $ABC_PATH ./$NAME $VERSION ;; *) echo "Usage: $SCRIPTNAME {start|status|restart|stop|version}" >&2 exit 3 ;; esac : So, where and how should I write the needed environment variables for: a) Go needs the following statements (ie: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin ENV_PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin `# env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin") b) and Scala needs this one: (ie CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar). TIA for an explanation how to do so. Cheers.
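    As a quick diagnostic aid (not part of the original question), here is a minimal Java sketch; since abc already depends on Java, the init script could launch it to log exactly which PATH and CLASSPATH a boot-time process inherits. The class name is hypothetical:

        // Minimal sketch: print the environment this process actually inherited,
        // to compare a boot-time launch against a terminal launch.
        public class EnvProbe {
            public static void main(String[] args) {
                System.out.println("PATH=" + System.getenv("PATH"));
                System.out.println("CLASSPATH=" + System.getenv("CLASSPATH"));
            }
        }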

    Read the article

  • Extended URL php

    - by web.ask
    I have a problem with the extended URL here in PHP. I'm not even sure if it's called an extended URL. I was wondering if someone can help me, since I'm no good with PHP. I have a PHP page, careers.php, that shows the career listing, but when I click on the "Learn more" link it goes to http(dot)localhost/houseofpatel/jobs/4/Career-4 and outputs "Object not found". Does xampplite have something to do with this? Or is it just the code? I also have breadcrumb.php in here. Please see the code below:

        session_start();
        include("Breadcrumb.php");
        $trail = new Breadcrumb();
        $trail->add('Careers', $_SERVER['PHP_SELF'], 1);
        $trail->output();

        # REMOVE SPECIAL CHARACTERS
        $sPattern = array('/[^a-zA-Z0-9 -]/', '/[ -]+/', '/^-|-$/');
        $sReplace = array('', '-', '');
        mysql_query("SET CHARACTER_SET utf8");
        //$query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT 0, 4";
        if(!$_GET['jobs']){
            $jobsPerPage = 4;
        }else{
            $jobsPerPage = $_GET['jobs'];
        }
        if(!$_GET['page']){
            $currPage = 1;
            $showMoreJobs = true;
        }else{
            $currPage = $_GET['page'];
            $showMoreJobs = false;
        }
        $offset = ($currPage - 1) * $jobsPerPage;
        $query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted";
        $process = mysql_query($query);
        if(@mysql_num_rows($process) > 0){
            $totalRows = @mysql_num_rows($process);
            $query2 = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT $offset, $jobsPerPage";
            $process2 = mysql_query($query2);
            if(@mysql_num_rows($process2) > 0){
                $pagNav = pageNavigator($totalRows, $jobsPerPage, $currPage, '');
                while($row = @mysql_fetch_array($process2)){
                    $id = $row[0];
                    $title = stripslashes($row['title']);
                    $title1 = preg_replace($sPattern, $sReplace, $title);
                    $description = stripslashes($row['description']);
                    $description1 = substr($description, 0, 200);
                    $careers_list .= '
                        <div class="left" style="width:340px; padding-left:10px;">
                            <p><span class="blue"><strong>'.$title.'</strong></span></p>
                            '.$description1.'...
                            <p><a href="jobs/'.$id.'/'.$title1.'" title="Learn more">[ Learn More ]</a></p>
                        </div>';
                }
                /*$careers_list .= '
                    <div class="left" style="width: 100%; margin-top:15px;">
                        <p><a href="more-jobs" title="More Job Openings">MORE JOB OPENINGS >> </a></p>
                        <div class="divider left"></div>
                    </div>';
                */
                if($showMoreJobs){
                    if($totalRows > $jobsPerPage){
                        $currPage++;
                    }
                    $jobNav = "<a href=careers.php?page=$currPage title='More Job Openings'>MORE JOB OPENINGS >> </a>";
                }else{
                    $jobNav = "$pagNav";
                }
                $careers_list .= "<div class='left' style='width: 100%; margin-top:15px;'>
                    <p>$jobNav</p>
                    <div class='divider left'></div>
                </div>";
            }
        }
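    As an aside (not from the original post), the routing idea behind a "pretty" URL like /jobs/4/Career-4 is that a front controller parses the path into an id and a slug; in a XAMPP setup this mapping is normally done by the web server's rewrite rules handing the path back to careers.php, and "Object not found" is what you see when no such mapping exists. A minimal sketch of the parsing step, written in Java purely for illustration (the path value is hypothetical):

        // Sketch: split a pretty URL path into the job id and slug a router would use.
        public class JobRoute {
            public static void main(String[] args) {
                String path = "/jobs/4/Career-4";      // hypothetical request path
                String[] parts = path.split("/");      // ["", "jobs", "4", "Career-4"]
                int id = Integer.parseInt(parts[2]);
                String slug = parts[3];
                System.out.println("job id=" + id + ", slug=" + slug);
            }
        }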

    Read the article

  • CodePlex Daily Summary for Monday, July 01, 2013

    CodePlex Daily Summary for Monday, July 01, 2013

    Popular Releases

    QuickMon: Version 2.10.3: Mainly just a service release - no major changes. Toolbar buttons on main and config window can now be re-arrange (using ALT key) Added property to disable corrective scripts
    DotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.
    VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.
    Roadkill - .NET Wiki engine: Roadkill v1.7: New features in 1.7: New file manager: Multiple file uploads Drag and drop uploads Delete folders (admins only) Delete files (admins only) (Experimental) Syntaxhighlighting custom variable (using https://github.com/alexgorbatchev/SyntaxHighlighter) - use [[[code lang=c#|your code here]]] (Experimental) MathJax custom variable - use [[[Mathjax]]] and $$your tex$$ on the page. Improved black bar theme Site speed improvements for Javascript/CSS files - now just two files files ea...
    Download Sharepoint Solution package: Release 4: version updated for SP2013
    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.5: WinRT XAML Toolkit based on the Windows 8.0 and 8.1 Preview SDKs. Do not download the source code from here if you are looking for latest updates! You can download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features: Attachable Behaviors AwaitableUI extensions Composition library for visual tree rende...
    Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. Changes: Version 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source Code: The distribution contains a complete VS2010 project for the appli...
    ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX config
    Twitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributable
    Ultimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music Tagger
    BlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????
    Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...
    Terminals: Version 3.0 - Release: Changes since version 2.0: Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...
    NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6
    Python Tools for Visual Studio: 2.0 Beta: We're pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...
    Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...
    AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto Updater
    Document.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed ups

    New Projects

    AerCloud.net Client - Java, Linux & Windows: This project source code provides a step by step guide for using AerCloud.net Framework as a Service API. For more information please visit http://www.aercloud
    AmiClient – Asterisk Manager Interface (AMI) client based on the Rx Framework: Asterisk Manager Interface (AMI) client based on the Rx Framework
    baidupan: cdcdddd
    C#??????: C#??????
    ImageHelper: imagehelper
    IP switcher: IP switcher is a simple tool for switching settings, and store presets, on networkadapters.
    MastersProject: A MS project with a goal of creating a fully Code Contracts verified physics engine and a relatively simple game that uses it.
    Multiplatform card game: Example multipatform project.
    PhoneTools: A collection of tools designed to help developers create beautiful Windows Phone 8 apps.
    rodidexter: lll
    SharePoint 2013 List Item Encryption: This coding exercise project enables you to encrypt/decrypt list item text field in the browser using industry standard algorithms.
    tvaSoft: simulation, rotor dynamics, Finite Element Analisys, FEM, ODE, torsional vibration, flexural vibration
    X3DML Project: X3DML is an xml-based markup language that defines rules for modeling 3D scenes from a tag-based document. It may be usefull in 3D web design and VR.
    zhuang-tfs: zhuang tfs

    Read the article

  • Underwriting in a New Frontier: Spurring Innovation

    - by [email protected]
    Susan Keuer, product strategy manager for Oracle Insurance, shares her experiences and insight from the 2010 Association of Home Office Underwriters (AHOU) Annual Conference, April 11-14, in San Antonio, Texas.

    How can I be more innovative in underwriting? It's a common question I hear from insurance carriers, producers and others, so it was no surprise that it was the key theme at the recent 2010 AHOU Annual Conference. This year's event drew more than 900 insurance professionals involved in the underwriting process across life and annuities, property and casualty and reinsurance from around the globe, including the U.S., Canada, Australia, the Bahamas, and more, to San Antonio - a Texas city where innovation transformed a series of downtown drainage canals into its premier River Walk tourist destination.

    CNN's medical correspondent Dr. Sanjay Gupta kicked off the conference with a phenomenal opening session that drove home the theme of the conference, "Underwriting in a New Frontier: Spurring Innovation." Drawing from his own experience as a neurosurgeon treating critically injured patients in the field in Iraq, Gupta inspired audience members to think outside the box during the underwriting process. He shared a compelling story of operating on a soldier who had suffered head-related trauma in a field hospital. With minimal supplies available, Gupta used a Black and Decker saw to operate on the soldier's head and reduce pressure on his swelling brain. Drawing from this example, Gupta encouraged underwriters to think creatively, be innovative, and consider new tools and sources of information, such as social networking sites, during the underwriting process. So, as you are looking at risk, take into consideration all the resources you have available.

    Gupta also stressed the concept of IKIGAI - noting that individuals who believe that their life is worth living are less likely to die than are their counterparts without this belief. How does one quantify this approach to life or thought process when evaluating risk? Could this be something to consider as a "category" in the near future? How can this same belief in your own work spur innovation?

    The role of technology was a hot topic of discussion throughout the conference. Sessions delved into everything from the latest in underwriting software to the rise of social media and how it is being increasingly integrated into underwriting processes and solutions. In one session, a trio of panelists representing the carrier, producer and vendor communities stressed the importance to underwriters of leveraging new technology and the plethora of online information sources, which all could be used to accurately, honestly and consistently evaluate risk throughout the underwriting process. Another focused on the explosion of social media, noting:

    1. Social media is growing exponentially - About eight percent of Americans used social media five years ago. Today about 46 percent of Americans do so, with 85 percent of financial services professionals using social media in their work.
    2. It will impact your business - Underwriters reconfirmed over and over that they are increasingly using "free" tools that are available in cyberspace in lieu of more costly solutions, such as inspection reports conducted by individuals in the field.
    3. Information is instantly available on the Web, anytime, anywhere - LinkedIn was mentioned as a way to connect to peers in the underwriting community and producers alike. Many carriers and agents also are using Facebook to promote their company to customers - and as a point of entry that allows them to perform some functionality, such as accessing product marketing information, versus directing users to the carrier's own proprietary website. Other carriers have relaxed their tight brand marketing to allow their producers to drive more business to their personal Facebook sites, where they offer innovative tools such as Application Capture or ask for medical information in a more relaxed fashion.

    Other key topics at the conference included the economy, ongoing industry consolidation, real-estate valuations as an asset and input into the underwriting process, and producer trends. All stressed a "back to basics" approach for low-cost term products.

    Finally, Connie Merritt, RN, PHN, entertained the large group of attendees with audience-engaging insight on how to "Tame the Lions in Your Life - Dealing with Complainers, Bullies, Grumps and Curmudgeons." Merritt noted "we are too busy for our own good." She shared how her overachieving personality had impacted her life. Audience members then were asked to pick red, yellow, blue, or green shapes, without knowing that each one represented a specific personality trait. For example, those who picked blue were the peacemakers. Those who chose yellow were social - the hint was to "Be Quiet Longer." She then offered these "lion taming" steps:

    1. Admit It
    2. Accept It
    3. Let Go
    4. Be Present (which paralleled Gupta's IKIGAI concept)

    When thinking about underwriting, I encourage you to be present in the moment and think creatively, but don't be afraid to look ahead to the future and be an innovator. I hope to see you at next year's AHOU Annual Conference, May 1-4, 2011 at The Mirage in Las Vegas, Nev.

    Susan Keuer is the product strategy manager for new business underwriting. She brings more than 20 years of insurance industry experience working with leading insurance carriers and technology companies to her role on the product strategy team for life/annuities solutions within the Oracle Insurance Global Business Unit.

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much.

    I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth: you just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules of thumb around them.

    A good extension method should:

    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long-lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension; I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.

    For example, this is one of my personal favorites:

        public static bool IsBetween<T>(this T value, T low, T high)
            where T : IComparable<T>
        {
            return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;
        }

    This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:

        if (response.Employee.Address.YearsAt >= 2
            && response.Employee.Address.YearsAt <= 10)
        {
        ...
        }

    Now, you can instead type:

        if (response.Employee.Address.YearsAt.IsBetween(2, 10))
        {
        ...
        }

    Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc. -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.

    Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying that if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc. (though you may have reasons for doing so), but that you should beware of making things TOO general. For example, let's say we had an extension method like this:

        public static T ConvertTo<T>(this object value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This lets you do more fluent conversions like:

        double d = "5.0".ConvertTo<double>();

    However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:

        object value = new Employee();
        ...
        // class cast exception because typeof(IEmployee) != typeof(Employee)
        IEmployee emp = value.ConvertTo<IEmployee>();

    Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:

        public static T ConvertTo<T>(this IConvertible value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.

    Now my fourth rule is just my general rule of thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in their own sub-namespace, something akin to:

        namespace Shared.Core.Extensions { ... }

    This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense; you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.

    To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too. Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts I like to put in a static utility class, and then have extension methods for the syntactical candy. For instance, I think it imaginable that any object could be converted to XML:

        namespace Shared.Core
        {
            // A collection of XML utility classes
            public static class XmlUtility
            {
                ...
                // Serialize an object into an xml string
                public static string ToXml(object input)
                {
                    var xs = new XmlSerializer(input.GetType());

                    // use new UTF8Encoding here, not Encoding.UTF8. The latter includes
                    // the BOM which screws up subsequent reads, the former does not.
                    using (var memoryStream = new MemoryStream())
                    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                    {
                        xs.Serialize(xmlTextWriter, input);
                        return Encoding.UTF8.GetString(memoryStream.ToArray());
                    }
                }
                ...
            }
        }

    I also wanted to be able to call this from an object like:

        value.ToXml();

    But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

        namespace Shared.Core.Extensions
        {
            public static class XmlExtensions
            {
                public static string ToXml(this object value)
                {
                    return XmlUtility.ToXml(value);
                }
            }
        }

    So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending object's interface for ToXml().

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

    Pace of Innovation

    It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and give feedback on new features they want to see.

    What's the Plan for MySQL Cluster 7.3?

    Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:

    - New NoSQL APIs;
    - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
    - Performance and scalability enhancements;
    - Taking advantage of features in the latest MySQL 5.x Server GA.

    Design Rationale

    MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:

    - Concurrent NoSQL and SQL access to the database;
    - Auto-sharding with simple scale-out across commodity hardware;
    - Multi-master replication with failover and recovery both within and across data centers;
    - Shared-nothing architecture with no single point of failure;
    - Online scaling and schema changes;
    - ACID compliance and support for complex queries, across shards.

    Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including:

    - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
    - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.

    Implementation

    The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:

    - CASCADE
    - RESTRICT
    - NO ACTION
    - SET NULL

    In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that when using InnoDB, "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster, "NO ACTION" means "deferred check", i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

    Configuration

    There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:

    1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
    2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

    Getting Started

    Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

    Summary

    MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

    * Safe Harbor Statement

    This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
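    To make the configuration notes above concrete, here is a minimal JDBC sketch of declaring and exercising a foreign key on NDB tables. The schema, connection URL and credentials are illustrative assumptions, and it presumes MySQL Connector/J is on the classpath:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class NdbForeignKeyDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "user", "password");
                     Statement st = con.createStatement()) {
                    // Parent and child tables on the NDB (MySQL Cluster) storage engine.
                    st.executeUpdate("CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB");
                    st.executeUpdate("CREATE TABLE child (id INT PRIMARY KEY, parent_id INT, "
                            + "FOREIGN KEY (parent_id) REFERENCES parent(id) "
                            + "ON DELETE CASCADE) ENGINE=NDB");
                    st.executeUpdate("INSERT INTO parent VALUES (1)");
                    st.executeUpdate("INSERT INTO child VALUES (10, 1)");
                    // With ON DELETE CASCADE, removing the parent removes child row 10 as well.
                    st.executeUpdate("DELETE FROM parent WHERE id = 1");
                }
            }
        }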

    Read the article

  • CodePlex Daily Summary for Sunday, June 01, 2014

    CodePlex Daily Summary for Sunday, June 01, 2014

    Popular Releases

    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    Tooltip Web Preview: ToolTip Web Preview: Version 1.0
    Database Helper: Release 1.0.0.0: First Release of Database Helper
    CoMaSy: CoMaSy1.0.2: !Contact Management System
    Image View Slider: Image View Slider: This is a .NET component. We create this using VB.NET. Here you can use an Image Viewer with several properties to your application form. We wish somebody to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau. PBK GENAP 2014 - TI UKDW
    Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains the Missing Features in Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's New? The following examples: Insert Picture in Word Document, Insert Comments, Set Page Borders, Mail Merge from XML Data Source, Moving the Cursor. Feedback and Suggestions: Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    babelua: V1.5.6.0: V1.5.6.0 - 2014.5.30. New feature: support quick-cocos2d-x project now; support text search in scripts folder now, you can use this function in Search Result Window;
    SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...
    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
    Fancontroller: Fancontroller: Initial release
    Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0
    Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. The .dll component is inside.
    Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. This component was made with .NET Framework 4 and the programming language is C#.
    SQLSetupHelper: 1.0.0.0: First Stable Version of SQL Setup
    Composite Iconote: Composite Iconote: This is a composite that has been made with Microsoft Visual Studio 2013. Requirement: To develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
    Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
    fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to command line to handle common edge cases where find/replace operation works in GUI but not in command line. Fix for bug where selection in Encoding drop down was different when generating command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34 Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024 Fix for finding replacing...
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes NEW: Added Support for 'GokoImage.com' links NEW: Added Support for 'ViperII.com' links NEW: Added Support for 'PixxxView.com' links NEW: Added Support for 'ImgRex.com' links NEW: Added Support for 'PixLiv.com' links NEW: Added Support for 'imgsee.me' links NEW: Added Support for 'ImgS.it' links
    Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolbox improvement. XrmToolBox updates (v1.2014.5.28): Fix connecting to a connection with custom authentication without saved password. Tools improvement. New tool! Solution Components Mover (v1.2014.5.22): Transfer solution components from one solution to another one. Import/Export NN relationships (v1.2014.3.7): Allows you to import and export many to many relationships. Tools updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.

    New Projects

    Cet MicroWPF: WPF-like library for simple graphic-UI application using Netduino (Plus) 2 and the FTDI FT800 Eve board.
    Fakemons: Some Fakemons, powered by XML, XSLT, CSS and Javascript
    Fling OS: Fling OS is a C# operating system project aiming to create a new, managed operating system from the ground up.
    MudRoom: Experimental tool sets in mud parsing and area definition
    OOP-2112110158: My name's NgocDung
    RoslynResearch: Roslyn Research
    THD - Control de Usuarios: user management and permissions (Spanish in the original: "control de usuarios y permisos")
    WPF Kinect User Controls: WPF Kinect User Control project provides simple Tilt and Skeleton Tracking Parameter Controls.

    Read the article

  • Beginner’s Guide to Flock, the Social Media Browser

    - by Asian Angel
    Are you wanting a browser that can work as a social hub from the first moment that you start it up? If you love the idea of a browser that is ready to go out of the box then join us as we look at Flock. During the Install Process When you are installing Flock there are two install windows that you should watch for. The first one lets you choose between the “Express Setup & Custom Setup”. We recommend the “Custom Setup”. Once you have selected the “Custom Setup” you can choose which of the following options will enabled. Notice the “anonymous usage statistics” option at the bottom…you can choose to leave this enabled or disable it based on your comfort level. The First Look When you start Flock up for the first time it will open with three tabs. All three are of interest…especially if this is your first time using Flock. With the first tab you can jump right into “logging in/activating” favorite social services within Flock. This page is set to display each time that you open Flock unless you deselect the option in the lower left corner. The second tab provides a very nice overview of Flock and its’ built-in social management power. The third and final page can be considered a “Personal Page”. You can make some changes to the content displayed for quick and easy access and/or monitoring “Twitter Search, Favorite Feeds, Favorite Media, Friend Activity, & Favorite Sites”. Use the “Widget Menu” in the upper left corner to select the “Personal Page Components” that you would like to use. In the upper right corner there is a built-in “Search Bar” and buttons for “Posting to Your Blog & Uploading Media”. To help personalize the “My World Page” just a bit more you can even change the text to your name or whatever best suits your needs. The Flock Toolbar The “Flock Toolbar” is full of social account management goodness. In order from left to right the buttons are: My World (Homepage), Open People Sidebar, Open Media Bar, Open Feeds Sidebar, Webmail, Open Favorites Sidebar, Open Accounts and Services Sidebar, Open Web Clipboard Sidebar, Open Blog Editor, & Open Photo Uploader. The buttons will be “highlighted” with a blue background to help indicate which area you are in. The first area will display a listing of people that you are watching/following at the services shown here. Clicking on the “Media Bar Button” will display the following “Media Slider Bar” above your “Tab Bar”. Notice that there is a built-in “Search Bar” on the right side. Any photos, etc. clicked on will be opened in the currently focused tab below the “Media Bar”. Here is a listing of the “Media Streams” available for viewing. By default Flock will come with a small selection of pre-subscribed RSS Feeds. You can easily unsubscribe, rearrange, add custom folders, or non-categorized feeds as desired. RSS Feeds subscribed to here can be viewed combined together as a single feed (clickable links) in the “My World Page”. or can be viewed individually in a new tab. Very nice! Next on the “Flock Toolbar is the “Webmail Button”. You can set up access to your favorite “Yahoo!, Gmail, & AOL Mail” accounts from here. The “Favorites Sidebar” combines your “Browser History & Bookmarks” into one convenient location. The “Accounts and Services Sidebar” gives you quick and easy access to get logged into your favorite social accounts. Clicking on any of the links will open that particular service’s login page in a new tab. Want to store items such as photos, links, and text to add into a blog post or tweet later on? 
    Just drag and drop them into the “Web Clipboard Sidebar” for later access. Clicking on the “Blog Editor Button” will open up a separate blogging window to compose your posts in. If you have not logged into or set up an account yet in Flock you will see the following message window. The “Blogging Window”…nice, simple, and straightforward. If you are not already logged into your photo account(s) then you will see the following message window when you click on the “Photo Uploader Button”. Clicking “OK” will open the “Accounts and Services Sidebar” with compatible photo services highlighted in a light yellow color. Log in to your favorite service to start uploading all those great images.

    After Setting Up

    Here is what our browser looked like after setting up some of our favorite services. The Twitter feed is certainly looking nice and easy to read through… Some tweaking in the “RSS Feeds Sidebar” makes for a perfect reading experience. Keeping up with our e-mail is certainly easy to do too. A look back at the “Accounts and Services Sidebar” shows that all of our accounts are actively logged in (green dot on the right side). Going back to our “My World Page” you can see how nice everything looks for monitoring our “Friend Activity & Favorite Feeds”. Moving on to regular browsing, everything is looking very good… Flock is a perfect choice for anyone wanting a browser and social hub all built into a single app.

    Conclusion

    Anyone who loves keeping up with their favorite social services while browsing will find using Flock to be a wonderful experience. You literally get the best of both worlds with this browser.

    Links: Download Flock | The Official Flock Extensions Homepage | The Official Flock Toolbar Homepage

    Read the article

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and still people play this game, so we are thinking of a way to give them something new. Since then the game map has been a limited plane and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension, so, just a couple of weeks ago, we began to wonder how the game could look with other mappings:

    - Unlimited plane with continuous movement: this could be a step forward but still I'm not convinced.
    - Toroidal world (continuous or quantized movement): sincerely, I have worked with tori before but this time I want something more...
    - Spherical world with continuous movement: this would be great!

    What we want

    Users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then show this on the user's screen, rendering the objects inside a web element (canvas maybe? this is not a problem). When people click on the plane we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route between the user's current position and the clicked point.

    What we have

    We began writing a Java library with many useful maths to work with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coords and to provide some other useful features like view rotation, scale, translation etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction. The client side shows points on the screen and catches other interactions; the server side renders the view and does other calculus, like interpolating the route between the current position and the clicked point.

    Where is the problem?

    Obviously we want to have the shortest path to interpolate between the two route points. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere's surface. We thought the problem was that the route is calculated as the sum of two rotations about the X and Y axes. So we changed the way we calculate the destination quaternion: we get the third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey).

    What we got is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route is a little "strange", maybe? In the following image you get a view of the problem we get when the starting point is not (0, 0); as you can see, the starting point is not the (sphereRadius, 0, 0) vector, and as you can see the destination point (which is correctly drawn!) is not on the route. The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route maybe I'll get the real route, but I'm starting to think that there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere and get the geodesic? But how? Sorry for being way too verbose and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: This code version is related to the first image:

        public void setRouteStart(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            // set route start
            qtStart.setInertialToObject(tmp);
            // do other stuff like drawing start point...
        }

        public void impostaDestinazione(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            qtEnd.setInertialToObject(tmp);
            // do other stuff like drawing dest point...
        }

        public V3D interpolate(double totalTime, double t) {
            double _t = t / totalTime;
            Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
            RotationMatrix.inertialQuatToIObject(q);
            V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
            // other stuff, like drawing point...
            return p;
        }

        // mostly taken from a book!
        public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
            double cosO = q0.dot(q1);
            double q1w = q1.w;
            double q1x = q1.x;
            double q1y = q1.y;
            double q1z = q1.z;
            if (cosO < 0.0f) {
                q1w = -q1w; q1x = -q1x; q1y = -q1y; q1z = -q1z;
                cosO = -cosO;
            }
            double sinO = Math.sqrt(1.0f - cosO * cosO);
            double O = Math.atan2(sinO, cosO);
            double oneOverSinO = 1.0f / sinO;
            double k0 = Math.sin((1.0f - t) * O) * oneOverSinO;
            double k1 = Math.sin(t * O) * oneOverSinO;
            // Interpolate
            return new Quaternion(
                k0 * q0.w + k1 * q1w,
                k0 * q0.x + k1 * q1x,
                k0 * q0.y + k1 * q1y,
                k0 * q0.z + k1 * q1z
            );
        }

    A little dump of what I get (again, check image 1):

        Route info:
        Sphere radius and center: 200,000, (0.0, 0.0, 0.0)
        Route start: lat 0,000 °, lng 0,000 ° @v: (200,000, 0,000, 0,000), |v| = 200,000
        Route end: lat 30,000 °, lng 30,000 ° @v: (150,000, 86,603, 100,000), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (1,000, 0,000, -0,000, 0,000); 0,000 °; (1,000, 0,000, 0,000)
        Qt end: (0,933, 0,067, -0,250, 0,250); 42,181 °; (0,186, -0,695, 0,695)

        Route start: lat 30,000 °, lng 10,000 ° @v: (170,574, 30,077, 100,000), |v| = 200,000
        Route end: lat 80,000 °, lng -50,000 ° @v: (22,324, -26,604, 196,962), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (0,962, 0,023, -0,258, 0,084); 31,586 °; (0,083, -0,947, 0,309)
        Qt end: (0,694, -0,272, -0,583, -0,324); 92,062 °; (-0,377, -0,809, -0,450)
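    One way to sidestep the orientation bookkeeping entirely, offered here as a sketch rather than the poster's library code, is to slerp between the two position vectors themselves; the intermediate points then lie on the great circle through the endpoints by construction, for any starting latitude and longitude:

        // Sketch (independent of the poster's library): slerp the position
        // vectors directly, so every interpolated point lies on the geodesic.
        public final class GreatCircle {

            // Latitude/longitude in radians to a point on a sphere of radius r
            // (matches the convention of the dumps above).
            static double[] toCartesian(double lat, double lng, double r) {
                return new double[] {
                    r * Math.cos(lat) * Math.cos(lng),
                    r * Math.cos(lat) * Math.sin(lng),
                    r * Math.sin(lat)
                };
            }

            // Spherical linear interpolation between two points on the same sphere, t in [0, 1].
            static double[] slerp(double[] p0, double[] p1, double t) {
                double r2 = p0[0] * p0[0] + p0[1] * p0[1] + p0[2] * p0[2];
                double dot = p0[0] * p1[0] + p0[1] * p1[1] + p0[2] * p1[2];
                // Central angle between the two position vectors.
                double omega = Math.acos(Math.max(-1.0, Math.min(1.0, dot / r2)));
                double sinO = Math.sin(omega);
                if (sinO < 1e-12) return p0.clone();   // coincident (or antipodal) endpoints
                double k0 = Math.sin((1 - t) * omega) / sinO;
                double k1 = Math.sin(t * omega) / sinO;
                return new double[] {
                    k0 * p0[0] + k1 * p1[0],
                    k0 * p0[1] + k1 * p1[1],
                    k0 * p0[2] + k1 * p1[2]
                };
            }

            public static void main(String[] args) {
                // Endpoints taken from the second dump above (radius 200).
                double[] a = toCartesian(Math.toRadians(30), Math.toRadians(10), 200);
                double[] b = toCartesian(Math.toRadians(80), Math.toRadians(-50), 200);
                for (int i = 0; i <= 4; i++) {
                    double[] p = slerp(a, b, i / 4.0);
                    System.out.printf("t=%.2f -> (%.3f, %.3f, %.3f)%n", i / 4.0, p[0], p[1], p[2]);
                }
            }
        }

    This is equivalent to intersecting the sphere with the plane through its center and the two route points, which is exactly the geodesic construction the question ends by asking about.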

    Read the article

  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013

    Popular Releases

    Mastersign.Expressions: Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>); fixed multithreading issue with rand(); improved demo application
    NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 is now DNN6 and DNN7 compatible. Important: During update this install will overwrite the menu.xml setting; if you have changed this then make a backup before you upgrade and reapply your changes after the upgrade. Please view the following documentation if you are installing and configuring this module for the first time: System Requirements, Skill requirements, Downloads and documents, Step by step guide to a working store. Please ask all questions in the Discussions tab.
    Document.Editor: 2013.26: What's new for Document.Editor 2013.26: New Insert Chart Improved User Interface Minor Bug Fix's, improvements and speed ups
    Wsus Package Publisher: Release V1.2.1307.01: Fix an issue in the UI, approvals are not shown correctly in the 'Report' tab
    DirectX Tool Kit: July 2013: July 1, 2013 VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall Added use of sRGB WIC metadata for JPEG, PNG, and TIFF SaveToWIC functions updated with new optional setCustomProps parameter and error check with optional targetFormat
    Core Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Version 1.0
    JSON Toolkit: JSON Toolkit 4.1.736: Improved stringify performance. New serializing feature. New anonymous type support in constructors.
    DotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.
    VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.
    Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. Changes: Version 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source Code: The distribution contains a complete VS2010 project for the appli...
    ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX config
    Twitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributable
    Ultimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music Tagger
    BlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????
    Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...
    Terminals: Version 3.0 - Release: Changes since version 2.0: Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...
    NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6
    Python Tools for Visual Studio: 2.0 Beta: We're pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...
    Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...
    AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...

    New Projects

    ALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.
    Core Server 2012 Powershell Script Hyper-v Manager: Free core Server 2012 powershell scripts and batch files that replace the non-existent hyper-v manager, vmconnect and mstsc.
    Enhanced Deployment Service (EDS): EDS is a web service based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.
    ExtendedDialogBox: DialogBox library (Italian in the original: "Libreria DialogBox")
    Jazdy: This project is here only because we wanted to take advantage of a public git server.
    Mon Examen: This web interface is meant to make examinations
    neet: summary
    Orchard Multi-Choice Voting: A multiple choice voting Orchard module.
    Particle Swarm Optimization Solving Quadratic Assignment Problem: This project is submitted for the solving of QAP using PSO algorithms with addition of some modification
    Porjects: 23123123
    PPL Power Pack: PPL Power Pack
    Property Builder: Visual Studio tool for speeding up the process of coding class property getters and setters.
    RedRuler for Redline: I tried some on-screen rulers; none of them help me measure the UI element quickly based on the Redline. So I decided to create this handy RedRuler tool.
    Royale Living: Mahindra Royale Community Portal
    Search and booking Hotel or Tours: A student research project at TDT following the MVC 4 model (translated from Vietnamese)
    SystemBuilder.Show: This tool is a helper after you create your project in Visual Studio to create the respective objects and interface.
    TalentDesk: new project
    Tcmplex: The Training Center teaches many different kinds of courses, such as English, French, computer hardware and computer software
    TFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.
    Umbraco AdaptiveImages: Adaptive Images Package for Umbraco
    VirtualNet - A ILcode interpreter/emulator written in C++/Assembly: VirtualNet is an interpreter/emulator for running .net code in native without having to install the .Net FrameWork
    Visual Blocks: Visual Blocks ????IDE ????? ??????? ????? ????/??
    Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling field to remove blockers to adoption and to use and extend the Perfecto Mobile Cloud Device testing within the context of VS.
    Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.
    ???? - SmallBasic?: ?????????

    Read the article

  • 6 Facts About GlassFish Announcement

    - by Bruno.Borges
    Since Oracle announced the end of commercial support for future Oracle GlassFish Server versions, the Java EE world has started wondering what will happen to GlassFish Server Open Source Edition. Unfortunately, there's a lot of misleading information going around. So let me clarify some things with facts, not FUD.

    Fact #1 - GlassFish Open Source Edition is not dead
    GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Declaring that "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish community will remain strong towards the future of Java EE. Without a revenue-focused mindset, the community may actually be freer to shape the next version, untied from commercial decisions.

    Fact #2 - OGS support is not over
    As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no more future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there's no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners by simplifying decisions during commercial negotiations.

    Fact #3 - WebLogic is not always more expensive than OGS
    Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. At the time of this writing, OGS is list-priced at US$ 5,000 per processor. Some bloggers mention that WebLogic is more expensive than this. Fact #3.1: that is not necessarily the case. The entry edition of WebLogic is called "Standard Edition" and falls under a policy where some "Standard Edition" products are licensed on a per-socket basis - as of the current price list, US$ 10,000 per socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost-effective than OGS, and a customer can save money when running on a CPU with 4 cores or more, for example. Quote from the price list: "When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket." For more details speak to your Oracle sales representative - this is clearly at list price, and every customer typically has a relationship with Oracle (as they do with other vendors), so different contractual details may apply. And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been the more enterprise, mission-critical application server, since the BEA days. Different editions of WLS provide features and upgrade options such as the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, bundled ADF and TopLink licenses, a bundled Web Tier (Oracle HTTP Server) license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence management capabilities, advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc.

    Fact #4 - There's no major vendor supporting community builds of Java EE app servers
    There are no major vendors providing support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but no longer does. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE.

    Fact #5 - WebLogic and GlassFish share several Java EE implementations
    It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase.

    Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly
    WebLogic is a closed-source offering, commercialized through a license-plus-support-fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP - it is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. But the message now is much clearer: Oracle will commercially support only the proprietary product WebLogic, and will invest in GlassFish Server Open Source Edition as the reference implementation for the Java EE platform and the future Java EE 8, as a developer-friendly community distribution, and encourages community participation through Adopt a JSR and contributions to GlassFish. In comparison, Oracle's decision has pretty much the same goal as IBM's when it killed support for WebSphere Community Edition, and Red Hat's when it renamed the JBoss community edition to WildFly: simplifying and clarifying the marketing message and leaving the commercial field wide open to the commercial product only. Oracle can now, as other vendors have already been doing, focus on a single commercial offering. Some users are saying they will now move to WildFly, but it is important to note that Red Hat does not offer commercial support for WildFly builds. Although future JBoss EAP versions will come from the same codebase as WildFly, the builds will definitely not be the same, nor will they share 100% of their functionality and bug fixes. This means there will be no company running a WildFly build in production with support from Red Hat. This discussion has also surfaced an important and interesting point: Oracle offers a free-for-developers OTN license for WebLogic. For other environments this is different, but please note this is the same policy Red Hat applies to JBoss EAP, as stated in their download page and terms. Oracle had the same policy for OGS.

    TL;DR: GlassFish Server Open Source Edition isn't dead. Current and new OGS 2.x/3.x customers will continue to have support (respecting the Lifetime Support Policy). WebLogic is not necessarily more expensive than OGS. Oracle will focus on one commercially supported Java EE application server, just as other vendors limit themselves to supporting one build/product only. Community builds are hardly ever supported commercially. Commercially supported builds of open source products are not exactly the same codebase as the community builds.

    What's next for GlassFish and the Java EE community? There are conversations in place to tackle some of the community desires, most of them stated by Markus Eisele in his blog post. We will keep you posted.
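
    To make the arithmetic behind Fact #3 concrete, here is a quick worked sketch. The list prices are the ones quoted above; the 0.5 processor core factor for x86 chips is an assumption - check Oracle's current core factor table and talk to your sales representative before drawing conclusions from it.

    // Hypothetical cost sketch for a single-socket, 8-core x86 server.
    // List prices come from the post; the 0.5 core factor is an assumption.
    using System;

    class LicensingSketch
    {
        static void Main()
        {
            const int sockets = 1;
            const int coresPerSocket = 8;
            const double coreFactor = 0.5;        // assumed x86 core factor
            const double ogsPerProcessor = 5000;  // OGS list price (US$ per processor)
            const double wlsSePerSocket = 10000;  // WebLogic SE list price (US$ per socket)

            // OGS processor count = cores x core factor.
            double ogs = sockets * coresPerSocket * coreFactor * ogsPerProcessor;
            // WebLogic Standard Edition counts occupied sockets.
            double wlsSe = sockets * wlsSePerSocket;

            Console.WriteLine("OGS: ${0:N0} vs WebLogic SE: ${1:N0}", ogs, wlsSe);
            // Prints: OGS: $20,000 vs WebLogic SE: $10,000
        }
    }

    Under these assumptions the two offerings come out even at 4 cores per socket, and beyond that WebLogic SE pulls ahead - which is exactly the point of Fact #3.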

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius

    Cyber Island? Smart Mauritius? What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of 'Smart Cities' has become an urgent necessity. 'Smart Cities' refer to an urban transformation which, by using the latest ICT technologies, makes cities more efficient. Many governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high-bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various studies have identified four key enablers for smart city success: government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision."

    The Emtel Knowledge Series is part of Emtel's 25th anniversary celebrations throughout the year, and the master of ceremony, Kim Andersen, mentioned that there will be more upcoming events on a quarterly basis. As a representative of the Mauritius Software Craftsmanship Community (MSCC), there was absolutely no hesitation to join in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly, what is it with those buzzwords... As far as I remember, and as it was mentioned, "Cyber Island" is an old initiative from around 2005/2006 which was refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here, but that's another story.

    The venue and its own problems
    Like last time, the event was organised and held in the conference hall at Cyber Tower I in Ebene. Having worked there for some years, I know about the frustrating situation of finding a proper parking space. So, does Smart Island include better solutions for the search for parking? Maybe - let's see whether I will be able to answer that question at the end of the article. Anyway, after circling the tower almost two times, I finally got a decent space to put the car, without risking a ticket or damage.

    International speakers and their experience
    Once again, Emtel did a great job of getting international expertise onto the stage to share their experience and vision on this kind of undertaking. Personally, I really appreciated that they were speakers of global reach who could provide first-hand knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. He also spoke about the effort and transformation of New York City into a greener and more efficient smart city. Given modern technology, he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and, most probably, solid solutions that no one else had thought of before.

    Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius

    Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Next to free higher (tertiary) education, Finland has already produced a good number of innovations, among them: first country to grant voting rights to women, free higher education, a constitutional right to broadband connectivity, Nokia, Linux, Angry Birds, sauna, and others...

    General access to the internet via broadband and/or mobile connectivity is surely a key factor towards smart cities - or, better said, Smart Mauritius, given the dimensions of the area and the size of the population. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will have in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with the recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices.

    How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and Open University. Nowadays, you are actually able to learn for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience!

    Networking in the name of MSCC
    As briefly mentioned above, I was about to combine two approaches for this workshop: of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key moments of the day:
    Met with Arnaud Meslier and Kellie, both of Microsoft, to swap the latest information on IT events; hereby I got an invite to the Microsoft Windows Phone 8.1 Dev Camp.
    Got in touch with Arvin Lockee of Emtel to check our options to meet with the data team, seizing the opportunity for a visiting tour of the Emtel Data Centre.
    Had a great chat with Avinash Meetoo of Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general, and specifically in the private sector here in Mauritius.
    Additionally, a number of various other interesting chats...
    Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create better awareness of the MSCC within local IT businesses. There is more to come soon!

    Resume of the day
    The number of attendees at this event doubled or even tripled compared to last time. The whole organisation was improved massively, and the combination of presentations and summarising panel discussions was better than during the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next workshop in Q3.

    Update
    At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014

    Popular Releases

    Portable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as if they were regular SQL functions). Impact: Xamarin iOS.
    Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines - added all the parameters available from the Timeline endpoints in Tweetinvi; this is available for HomeTimeline, UserTimeline, MentionsTimeline. // Simple query: var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters: var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets - Add mis...
    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General information. IMPORTANT: on some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click the ZIP file, select Properties, and click the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    Tooltip Web Preview: ToolTip Web Preview: Version 1.0.
    Database Helper: Release 1.0.0.0: First release of Database Helper.
    CoMaSy: CoMaSy1.0.2: Contact Management System.
    Image View Slider: Image View Slider: This is a .NET component, created with VB.NET. It lets you use an image viewer with several properties on your application form. We wish somebody to improve it freely. Try it out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau (PBK GENAP 2014 - TI UKDW).
    Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains the missing features in the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new? The following examples: Insert Picture in Word Document; Insert Comments; Set Page Borders; Mail Merge from XML Data Source; Moving the Cursor. Feedback and suggestions: many more examples are yet to come here - keep visiting us, raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    SEToolbox: 01.032.014 Release 1: Added fix when loading game textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report that displays all in-game resources in a concise report. Added temp directory cleaner to keep excess files from building up. Fixed use of colors on the windows to work better with desktop schemes. Adding base support for multilingual resources; this will allow loading of the Space Engineers resources to show localized names and display localized date a...
    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
    Vi-AIO SearchBar: Vi - AIO Search Bar: Version 1.0.
    Top Verses ( Ayat Emas ): Binary Top Verses: This is the bin folder of the component; the .dll component is inside.
    Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. The component is made with .NET Framework 4 and the programming language is C#.
    SQLSetupHelper: 1.0.0.0: First stable version of SQL SetupHelper.
    Composite Iconote: Composite Iconote: This composite has been made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
    Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: Int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI - switch to the new Q16-HDRI build if you need HDRI.
    fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop-down differed when generating the command line in some cases, as reported in https://findandreplace.codeplex.com/workitem/34. Fix for "Backslash inserted before dot in replacement text", reported at https://findandreplace.codeplex.com/discussions/541024. Fix for finding replacing...
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: Changes - NEW: added support for 'GokoImage.com', 'ViperII.com', 'PixxxView.com', 'ImgRex.com', 'PixLiv.com', 'imgsee.me' and 'ImgS.it' links.
    Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix connecting to a connection with custom authentication without saved password. Tools improvement - new tool! Solution Components Mover (v1.2014.5.22): transfer solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7): allows you to import and export many-to-many relationships. Tools updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.

    New Projects

    [ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D is a game that revisits the classic Pong concept by porting it to a 3D environment, using the C# language and the Unity3D engine.
    Bootstrap for MVC: Bootstrap for MVC.
    F. A. Q. - Najczesciej zadawane pytania: FAQ
    ForuMvc: Technifutur short project.
    homework456: no.
    iStoody: Studies organizing solution, available through apps for Windows and Windows Phone.
    liaoliao: ???????????
    Price Tracker: Allows a user to track prices based on parsed emails.
    RoslynEval: Roslyn
    Rx Hub: Rx Hub provides server-side computation initiated by subscriber requests.
    Sharepoint Online AppCache Reset: We are an IT resource company providing virtual IT services and custom and open-source programs for everyday needs.
    UnitConversionLib : Smart Unit Conversion Library in C#: Conversion of units, arithmetic operations and parsing quantities with their units at run time. Smart unit converter and conversion lib for physical quantities,

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this?

    //
    // Skinned Mesh Effect file
    // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved.
    //

    float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f};      // light direction
    float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // light diffuse
    float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f};
    float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f};

    // Matrix palette
    static const int MAX_MATRICES = 100;
    float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY;
    float4x4 mViewProj : VIEWPROJECTION;

    ///////////////////////////////////////////////////////
    struct VS_INPUT
    {
        float4 Pos          : POSITION;
        float4 BlendWeights : BLENDWEIGHT;
        float4 BlendIndices : BLENDINDICES;
        float3 Normal       : NORMAL;
        float3 Tex0         : TEXCOORD0;
    };

    struct VS_OUTPUT
    {
        float4 Pos     : POSITION;
        float4 Diffuse : COLOR;
        float2 Tex0    : TEXCOORD0;
    };

    float3 Diffuse(float3 Normal)
    {
        float CosTheta;
        // N.L Clamped
        CosTheta = max(0.0f, dot(Normal, lhtDir.xyz));
        // propogate scalar result to vector
        return (CosTheta);
    }

    VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones)
    {
        VS_OUTPUT o;
        float3 Pos = 0.0f;
        float3 Normal = 0.0f;
        float LastWeight = 0.0f;

        // Compensate for lack of UBYTE4 on Geforce3
        int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices);

        // cast the vectors to arrays for use in the for loop below
        float BlendWeightsArray[4] = (float[4])i.BlendWeights;
        int IndexArray[4] = (int[4])IndexVector;

        // calculate the pos/normal using the "normal" weights
        // and accumulate the weights to calculate the last weight
        for (int iBone = 0; iBone < NumBones-1; iBone++)
        {
            LastWeight = LastWeight + BlendWeightsArray[iBone];
            Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone];
            Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone];
        }
        LastWeight = 1.0f - LastWeight;

        // Now that we have the calculated weight, add in the final influence
        Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight);
        Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight);

        // transform position from world space into view and then projection space
        //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj);
        o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj);
        o.Diffuse.x = 0.0f;
        o.Diffuse.y = 0.0f;
        o.Diffuse.z = 0.0f;
        o.Diffuse.w = 0.0f;
        o.Tex0 = float2(0, 0);
        return o;
    }

    technique t0
    {
        pass p0
        {
            VertexShader = compile vs_3_0 VShade(4);
        }
    }

    I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar:

    public void Draw()
    {
        var device = vertexBuffer.Device;
        device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0);
        device.SetRenderState(RenderState.Lighting, true);
        device.SetRenderState(RenderState.DitherEnable, true);
        device.SetRenderState(RenderState.ZEnable, true);
        device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise);
        device.SetRenderState(RenderState.NormalizeNormals, true);
        device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic);
        device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic);

        device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0));
        device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY));
        device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000));

        var material = new Material();
        material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White);
        material.Power = 1f;

        device.SetStreamSource(0, vertexBuffer, 0, vertexSize);
        device.VertexDeclaration = vertexDeclaration;
        device.Indices = indexBuffer;
        device.Material = material;
        device.SetTexture(0, texture);

        var param = effect.GetParameter(null, "mWorldMatrixArray");
        var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray();
        effect.SetValue(param, boneWorldTransforms);
        effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity); // Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000));
        effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse);
        effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient);

        effect.Technique = effect.GetTechnique(0);
        var passes = effect.Begin(FX.DoNotSaveState);
        for (var i = 0; i < passes; i++)
        {
            effect.BeginPass(i);
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3);
            effect.EndPass();
        }
        effect.End();
    }

    Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in the shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!
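
    Not an answer, but a debugging sketch that may help narrow this down: if the effect technique fails to validate (for example, if the device rejects vs_3_0) or a parameter name doesn't resolve, the draw can silently fall back to the fixed-function pipeline, which would light and texture the mesh exactly as described. The SlimDX calls below mirror the question's code; the exact failure behavior of the wrapper is an assumption.

    // Hedged sanity checks before drawing (SlimDX, Direct3D 9).
    var hWorldArray = effect.GetParameter(null, "mWorldMatrixArray");
    var hViewProj = effect.GetParameter(null, "mViewProj");
    if (hWorldArray == null || hViewProj == null)
        throw new InvalidOperationException(
            "Effect parameter not found - check parameter names against the .fx file.");

    // Confirm the technique can actually run on this device; an invalid
    // technique is a common reason shader edits appear to have no effect.
    // ValidateTechnique is assumed to report failure (exception or result)
    // when the compiled shader model isn't supported.
    var technique = effect.GetTechnique(0);
    effect.ValidateTechnique(technique);
    effect.Technique = technique;

    Also worth noting: technique t0 compiles only a vertex shader, so the pixel stage still runs fixed-function texture blending; adding a minimal ps_3_0 pixel shader that just returns the interpolated COLOR0 would make it unambiguous whether the vertex shader outputs are reaching the rasterizer.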

    Read the article

  • how to define a field of view for the entire map for shadow?

    - by Mehdi Bugnard
    I recently added shadow mapping to my XNA game to include shadows. I followed the nice and famous tutorial from Riemers: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php. This code works nicely and I can see my light source and shadow. But the problem is that my light source does not match the field of view that I created. I want the light to cover the entire map of my game. I don't know why, but the light only affects 2-3 cubes of my map.

    Screenshot: the emission of light illuminates only 2-3 blocks and not the full map.

    Here is the code where I create the field of view for the LightViewProjection matrix:

    Vector3 lightDir = new Vector3(10, 52, 10);
    lightPos = new Vector3(10, 52, 10);
    Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0));
    Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f);
    lightsViewProjectionMatrix = lightsView * lightsProjection;

    As you can see, my near and far planes are set to 20f and 1000f, so I don't know why the light stops after 2 cubes. It should be bigger.

    Here is where I set the values for my custom HLSL effect in the shader file:

    /* SHADOW VALUE */
    effectWorld.Parameters["LightDirection"].SetValue(lightDir);
    effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * lightsViewProjectionMatrix);
    effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection);
    effectWorld.Parameters["xLightPower"].SetValue(1f);
    effectWorld.Parameters["xAmbient"].SetValue(0.3f);

    Here is my custom HLSL shader effect file "*.fx":

    // This sample uses a simple Lambert lighting model.
    float3 LightDirection = normalize(float3(-1, -1, -1));
    float3 DiffuseLight = 1.25;
    float3 AmbientLight = 0.25;

    uniform const float3 DiffuseColor = 1;
    uniform const float Alpha = 1;
    uniform const float3 EmissiveColor = 0;
    uniform const float3 SpecularColor = 1;
    uniform const float SpecularPower = 16;
    uniform const float3 EyePosition;

    // Fog attributes
    uniform const float FogEnabled;
    uniform const float FogStart;
    uniform const float FogEnd;
    uniform const float3 FogColor;

    float3 cameraPos : CAMERAPOS;

    texture Texture;
    sampler Sampler = sampler_state
    {
        Texture = (Texture);
        magfilter = LINEAR;
        minfilter = LINEAR;
        mipfilter = LINEAR;
        AddressU = mirror;
        AddressV = mirror;
    };

    texture xShadowMap;
    sampler ShadowMapSampler = sampler_state
    {
        Texture = <xShadowMap>;
        magfilter = LINEAR;
        minfilter = LINEAR;
        mipfilter = LINEAR;
        AddressU = clamp;
        AddressV = clamp;
    };

    /* *************** */
    /* SHADOW MAP CODE */
    /* *************** */

    struct SMapVertexToPixel
    {
        float4 Position   : POSITION;
        float4 Position2D : TEXCOORD0;
    };

    struct SMapPixelToFrame
    {
        float4 Color : COLOR0;
    };

    struct SSceneVertexToPixel
    {
        float4 Position           : POSITION;
        float4 Pos2DAsSeenByLight : TEXCOORD0;
        float2 TexCoords          : TEXCOORD1;
        float3 Normal             : TEXCOORD2;
        float4 Position3D         : TEXCOORD3;
    };

    struct SScenePixelToFrame
    {
        float4 Color : COLOR0;
    };

    float DotProduct(float3 lightPos, float3 pos3D, float3 normal)
    {
        float3 lightDir = normalize(pos3D - lightPos);
        return dot(-lightDir, normal);
    }

    SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
    {
        SSceneVertexToPixel Output = (SSceneVertexToPixel)0;
        Output.Position = mul(inPos, xWorldViewProjection);
        Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection);
        Output.Normal = normalize(mul(inNormal, (float3x3)World));
        Output.Position3D = mul(inPos, World);
        Output.TexCoords = inTexCoords;
        return Output;
    }

    SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
    {
        SScenePixelToFrame Output = (SScenePixelToFrame)0;

        float2 ProjectedTexCoords;
        ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
        ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;

        float diffuseLightingFactor = 0;
        if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
        {
            float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r;
            float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w;
            if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap)
            {
                diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
                diffuseLightingFactor = saturate(diffuseLightingFactor);
                diffuseLightingFactor *= xLightPower;
            }
        }

        float4 baseColor = tex2D(Sampler, PSIn.TexCoords);
        Output.Color = baseColor * (diffuseLightingFactor + xAmbient);
        return Output;
    }

    SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
    {
        SMapVertexToPixel Output = (SMapVertexToPixel)0;
        Output.Position = mul(inPos, xLightsWorldViewProjection);
        Output.Position2D = Output.Position;
        return Output;
    }

    SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
    {
        SMapPixelToFrame Output = (SMapPixelToFrame)0;
        Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
        return Output;
    }

    /* ******************* */
    /* END SHADOW MAP CODE */
    /* ******************* */

    // For rendering without instancing.
    technique ShadowMap
    {
        pass Pass0
        {
            VertexShader = compile vs_2_0 ShadowMapVertexShader();
            PixelShader = compile ps_2_0 ShadowMapPixelShader();
        }
    }

    technique ShadowedScene
    {
        /*
        pass Pass0
        {
            VertexShader = compile vs_2_0 VSBasicTx();
            PixelShader = compile ps_2_0 PSBasicTx();
        }
        */
        pass Pass1
        {
            VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
            PixelShader = compile ps_2_0 ShadowedScenePixelShader();
        }
    }

    technique SimpleFog
    {
        pass Pass0
        {
            VertexShader = compile vs_2_0 VSBasicTx();
            PixelShader = compile ps_2_0 PSBasicTx();
        }
    }

    I edited my .fx file to show you only the information and functions related to the shadow ;-)
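
    One approach worth trying - a sketch under the assumption that the light is meant to act like a directional "sun" over the whole map rather than a spotlight - is to replace the perspective projection with an orthographic one sized to the map bounds, so the shadow volume covers the full play area regardless of field-of-view angles. The map extents below are assumptions; tune them to your world:

    // Sketch (XNA): orthographic light projection covering the whole map.
    // Map size and plane distances are assumptions - adjust to your world.
    Vector3 lightPos = new Vector3(10, 52, 10);
    Vector3 mapCenter = new Vector3(105, 50, 105);

    Matrix lightsView = Matrix.CreateLookAt(lightPos, mapCenter, Vector3.Up);

    // Width/height in world units; they must enclose the map as seen from the light.
    Matrix lightsProjection = Matrix.CreateOrthographic(
        300f,   // width  (assumed map extent)
        300f,   // height (assumed map extent)
        1f,     // near plane
        500f);  // far plane - must reach past the farthest shadow caster

    Matrix lightsViewProjectionMatrix = lightsView * lightsProjection;

    The trade-off of any wider projection, orthographic or perspective, is that the same shadow-map resolution is stretched over more world space, so shadow edges get blockier; increasing the shadow-map size, or splitting the map into cascades, are the usual counters.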

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee
    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"
    Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also during an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?
    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

    A backup is nothing more than an untested restore
    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that. There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?
    This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
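
    As a small illustration of the "interrogate the servers" approach described above, here is a hedged sketch that asks each instance for databases missing a recent full backup via the standard msdb.dbo.backupset catalog. The author's original used PowerShell rather than C#, the server names are placeholders, and the 24-hour threshold is an assumption - adjust it to your RPO.

    // Hedged sketch: report databases whose last full backup is missing or
    // older than 24 hours, across a list of instances. Uses msdb.dbo.backupset.
    using System;
    using System.Data.SqlClient;

    class BackupCheck
    {
        static void Main()
        {
            string[] servers = { "SQLPROD01", "SQLPROD02" }; // hypothetical names

            const string query = @"
                SELECT d.name
                FROM sys.databases d
                LEFT JOIN msdb.dbo.backupset b
                       ON b.database_name = d.name AND b.type = 'D'  -- 'D' = full backup
                WHERE d.name <> 'tempdb'
                GROUP BY d.name
                HAVING MAX(b.backup_finish_date) IS NULL
                    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());";

            foreach (var server in servers)
            {
                using (var conn = new SqlConnection("Server=" + server + ";Integrated Security=true"))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1} has no recent full backup",
                                              server, reader.GetString(0));
                    }
                }
            }
        }
    }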

    Read the article
