Search Results

Search found 19635 results on 786 pages for 'materialized path pattern'.


  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what arguments and parameters are: Method definition parameters are the input variables of the method. Method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following in §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: Parameters have always been named and still are. Parameters have never been optional and still aren’t. Named Arguments Until now, the C# compiler matched method call arguments with method parameters by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the names of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.” name: “Morgado” age: 42 What this new feature allows is using the names of the parameters to identify the corresponding arguments in the form: name:value Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters is done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2 second: 2 third: 0 Notice the variable names. Although invalid as C# identifiers, they are valid .NET identifiers, thus avoiding collisions between user-written and compiler-generated code. Besides allowing the argument list to be reordered, this feature is very useful for self-documenting the code, for example, when the argument list is very long or it is not clear, from the call site, what the arguments are. Optional Arguments Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list, and their default value is used as the value of the parameter if the corresponding argument is missing from the method call. For this call: int i = 0; Method(i, third: ++i); the compiler will generate this code: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1 second: 2 third: 1 Because, when method parameters have default values, arguments can be omitted from the call, this might seem like method overloading or a good replacement for it, but it isn’t.
    Although methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like constant fields, parameter default values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, the new values won’t be picked up by already compiled code. At first glance, I thought that using optional arguments for badly written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.
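    To make the hard-coding pitfall concrete, here is a minimal, self-contained C# sketch (the Reports/Program names are mine, for illustration only):

        // Library.dll, version 1
        public static class Reports
        {
            // The default for 'pageSize' is baked into every call site at
            // compile time, exactly like a constant field would be.
            public static string Render(string title, int pageSize = 50)
            {
                return title + " / " + pageSize;
            }
        }

        // Client.exe, compiled against Library.dll version 1
        public static class Program
        {
            public static void Main()
            {
                // The compiler emits the equivalent of Reports.Render("Q1", 50).
                // If a later Library.dll changes the default to 100, this already
                // compiled call still passes 50 until the client is recompiled.
                System.Console.WriteLine(Reports.Render("Q1"));
            }
        }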

    Read the article

  • ComboBox Control using silverlight

    - by Aamir Hasan
    DropDown.zip (135.33 kb) LiveDemo
    Introduction
    In this article I am going to explore some of the features of the ComboBox. The ComboBox makes a collection visible and allows users to pick an item from the collection. After its first initialization, no matter whether you bind a new data source with fewer or more elements, the dropdown keeps its original height. One workaround is the following (see the C# sketch at the end of this post): 1. store the properties of the original ComboBox; 2. delete the ComboBox, removing it from its container; 3. create a new ComboBox and place it in the container; 4. recover the stored properties; 5. bind the new data source to the newly created ComboBox.
    Creating the Silverlight Project
    Create a new Silverlight 3 project in VS 2008. Name it ComboBoxSample.
    Simple Data Binding
    Add a System.Windows.Controls.Data reference to the Silverlight project.
    Silverlight UserControl
    Add a new page to display Bus data using a DataGrid. The ComboBox element represents a ComboBox control in XAML: <ComboBox></ComboBox>
    ComboBox XAML
    <StackPanel Orientation="Vertical"> <ComboBox Width="120" Height="30" x:Name="DaysDropDownList" DisplayMemberPath="Name"> <!--<ComboBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Path=Name}" FontWeight="Bold"></TextBlock> <TextBlock Text=", "></TextBlock> <TextBlock Text="{Binding Path=ID}"></TextBlock> </StackPanel> </DataTemplate> </ComboBox.ItemTemplate>--> </ComboBox> </StackPanel>
    The ComboBox control supports data binding: 1. By setting the DisplayMemberPath property you can specify which data item you want displayed in the ComboBox. 2. Setting the SelectedIndex allows you to specify which item in the ComboBox you want selected.
    Business Object
    public class Bus { public string Name { get; set; } public float Price { get; set; } }
    Data Binding
    private List<Bus> populatedlistBus() { listBus = new List<Bus>(); listBus.Add(new Bus() { Name = "Bus 1", Price = 55f }); listBus.Add(new Bus() { Name = "Bus 2", Price = 55.7f }); listBus.Add(new Bus() { Name = "Bus 3", Price = 2f }); listBus.Add(new Bus() { Name = "Bus 4", Price = 6f }); listBus.Add(new Bus() { Name = "Bus 5", Price = 9F }); listBus.Add(new Bus() { Name = "Bus 6", Price = 10.1f }); return listBus; }
    The following line of code sets the ItemsSource property of the ComboBox: DaysDropDownList.ItemsSource = populatedlistBus();
    Output
    I hope you enjoyed this simple Silverlight example.
    Conclusion
    In this article, we saw how data binding works in the ComboBox. You learned how to work with the ComboBox control in Silverlight.
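    A minimal sketch of the height workaround listed above, assuming Silverlight 3, a parent panel that holds the ComboBox, and the Bus class from this article (the names and structure are illustrative, not from the original download):

        using System.Collections;
        using System.Windows.Controls;

        public static class ComboBoxRebinder
        {
            // Recreates the ComboBox so the dropdown height matches the new data source.
            public static ComboBox Rebind(ComboBox old, Panel container, IList newSource)
            {
                // 1. store the properties of the original ComboBox
                double width = old.Width, height = old.Height;
                string displayPath = old.DisplayMemberPath;

                // 2. delete the ComboBox, removing it from its container
                container.Children.Remove(old);

                // 3. create a new ComboBox and place it in the container
                var fresh = new ComboBox();
                container.Children.Add(fresh);

                // 4. recover the stored properties
                fresh.Width = width;
                fresh.Height = height;
                fresh.DisplayMemberPath = displayPath;

                // 5. bind the new data source to the newly created ComboBox
                fresh.ItemsSource = newSource;
                return fresh;
            }
        }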

    Read the article

  • top Tweets SOA Partner Community – August 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Lucas Jellema ?Published an article about organizing Fusion Middleware Administration: http://technology.amis.nl/2012/07/31/organizing-fusion-middleware-administration-in-a-smart-and-frugal-way … - many organizations are struggling with this. ServiceTechSymposium Countdown to the Early Bird Registration Discount deadline. Only 4 days left! http://ow.ly/cBCiv demed ?Good chatting w Bob Rhubart, Thomas Erl & Tim Hall on SOA & Cloud Symposium https://blogs.oracle.com/archbeat/entry/podcast_show_notes_thomas_erl … @soaschool @OTNArchBeat -- CU in London! SOA Community top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/07/30/top-tweets-soa-partner-community-july-2012/ … #soacommunity SOA Community ?Are You a facebook member - do You follow http://www.facebook.com/soacommunity ? #soacommunity #soa SOA Community ?SOA 24/7 - Home Page: http://soa247.com/#.UBJsN8n3kyk.twitter … #soacommunity OracleBlogs ?Handling Large Payloads in SOA Suite 11g http://ow.ly/1lFAih OracleBlogs ?SOA Community Newsletter July 2012 http://ow.ly/1lFx6s OTNArchBeat Podcast Show Notes: Thomas Erl on SOA, Cloud, and Service Technology http://bit.ly/OOHTUJ SOA Community SOA Community Newsletter July 2012 http://wp.me/p10C8u-s7 OTNArchBeat ?OTN ArchBeat Podcast: Thomas Erl on SOA, Cloud, and Service Technology - Part 1 http://pub.vitrue.com/fMti OProcessAccel ?Just released! White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … OTNArchBeat ?SOA, Cloud, and Service Technologies - Part 1 of 4 - A conversation with SOA, Cloud, and Service Technology Symposiu... http://ow.ly/1lDyAK OracleBlogs ?SOA Suite 11g PS5 Bundled Patch 3 (11.1.1.6.3) http://ow.ly/1lCW1S Simon Haslam My write-up of the virtues of the #ukoug App Server & Middleware SIG http://bit.ly/LMWdfY What's important to you for our next meeting? SOA Community SOA Partner Community Survey 2012 http://wp.me/p10C8u-qY Simone Geib ?RT @jswaroop: #Oracle positioned in the Leader's quadrant - Gartner Magic Quadrants for Application Infrastructure (SOA & SOA Gov)... ServiceTechSymposium New Supporting Organization, IBTI has joined the Symposium! http://www.servicetechsymposium.com/ orclateamsoa ?A-Team Blog #ateam: BPM 11g Task Form Version Considerations http://ow.ly/1lA7XS OTNArchBeat Oracle content at SOA, Cloud and Service Technology Symposium (and discount code!) http://pub.vitrue.com/FPcW OracleBlogs ?BPM 11g Task Form Version Considerations http://ow.ly/1lzOrX OTNArchBeat BPM 11g #ADF Task Form Versioning | Christopher Karl Chan #fusionmiddleware http://pub.vitrue.com/0qP2 OTNArchBeat Lightweight ADF Task Flow for BPM Human Tasks Overview | @AndrejusB #fusionmiddleware http://pub.vitrue.com/z7x9 SOA Community Oracle Fusion Middleware Summer Camps in Lisbon report by Link Consulting http://middlewarebylink.wordpress.com/2012/07/20/oracle-fusion-middleware-summer-camps-in-lisbon/ … #ofmsummercamps #soa #bpm SOA Community ?Clemens Utschig-Utschig & Manas Deb The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts… Simone Geib ?RT @oprocessaccel: Just released! 
White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … jornica ?Report from Oracle Fusion Middleware Summer Camps in Munich: SOA Suite 11g advanced training experiences @soacommunity http://bit.ly/Mw3btE Simone Geib ?Bruce Tierney: Update - SOA & BPM Customer Insights Webcast Series: | https://blogs.oracle.com/SOA/entry/update_soa_bpm_customer_insights … OTNArchBeat Business SOA: Thinking is Dead | @mosesjones http://pub.vitrue.com/k8mw esentri ?had 3 great days in Munich at #Oracle #soacommunity Summercamp! Special thanks to Geoffroy de Lamalle from eProseed! Danilo Schmiedel ?Used my time in train to setup the ps5 soa/bpm vbox-image.Works like a dream. Setup-Readme is perfect! Saves a lot of time!!! @soacommunity 18 Jul SOA Community ?THANKS for the excellent OFM summer camps - save trip home - share your pictures at http://www.facebook.com/soacommunity #ofmsummercamps #soacommunity doors BBQ-party with Oracle @soacommunity. 5Star! #lovemunich #ofmsummercamps pic.twitter.com/ztfcGn2S leonsmiers ?New #Capgemini blog post "Continuous Improvement of Business Agility" http://bit.ly/Lr0EwG #bpm #yam Eric Elzinga ?MDS Explorer utility, http://see.sc/4qdb43 #soasuite ServiceTechSymposium ?@techsymp New speaker Demed L’Her from Oracle has been added to the symposium calendar. http://ow.ly/cjnyw SOA Community ?Last day of the Fusion Middleware summer camps - we continue at 9.00 am. send us your barbecue pictures! #ofmsummercamps #soacommunity SOA Community ?Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting http://middlewarebylink.wordpress.com/2012/06/26/delivering-soa-governance-with-eams-and-oracle-enterprise-repository/ … #soacommunity #soa #oer OracleBlogs ?Process Accelerator Kit http://ow.ly/1loaCw 15 Jul SOA Community ?Sun is back in Munich! Send your pictures Middleware summer camps! #ofmsummercamps We start tomorrow 11.00 at Oracle pic.twitter.com/6FStxomk Walter Montantes ?Gracias, Obrigado, Thank you, Danke a Lisboa y a @soacommunity @wlscommunity. From the Mexican guys!! cc @mikeintoch #ofmsummercamps Andrejus Baranovskis Tips & Tricks How to Run Oracle BPM 11g PS5 Workspace from Custom ADF 11g Application http://fb.me/1zOf3h2K8 JDeveloper & ADF ?Fusion Apps Enterprise Repository - Explained http://dlvr.it/1rpjWd Steve Walker ?Oracle #Exalogic is the logical choice for running business applications. Exalogic Software 2.0 launches 7/25. Reg at http://bit.ly/NedQ9L A. Chatziantoniou ?Landed in rainy Amsterdam after a great week in Lisbon for the #ofmsummercamps - multo obrigado for Jürgen for another fantastic event SOA Community ?Teams present #BPM11g POC results at #ofmsummercamps - great job! #soacommunity pic.twitter.com/0d4txkWF Sabine Leitner ?#DOAG SIG Middleware 29.08.2012 Köln über MW, Administration, Monitoring http://bit.ly/P47w82 @soacommunity @OracleMW @OracleFMW 12 Jul philmulhall ?Thanks @soacommunity for a great week at the #ofmsummercamps. Hard work done so time for a few cold ones in Lisboa. pic.twitter.com/LVUUuwTh peter230769 ?RT: andrea_rocco_31: RT @soacommunity: Enjoy the networking event at #ofmsummercamps want to attend next time ... pic.twitter.com/D1HRndi4 Niels Gorter ?#ofmsummercamps dinner in Lisbon. Great weather, scenery, training, people, on and on. 
Big THANKS @soacommunity JDeveloper & ADF ?Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Domain http://dlvr.it/1r0c2j Andrea Rocco ?RT @soacommunity: Jamy pastry at cafe Belem - who is the ghost there?!? http://via.me/-2x33uk6 Simon Haslam ?Sounds great - sorry I couldn't make it. RT @soacommunity: 6pm BPM advanced training hard work to build the POC #ofmsummercamps philmulhall ?A well earned rest after a hard days work @soacommunity #summercamps pic.twitter.com/LKK7VOVS philmulhall ?Some more hard working delegates @soacommunity #summercamps pic.twitter.com/gWpk1HZh SOA Community ?Error message at the BPM POC - will The #ace director understand the message and solve it? #ofmsummercamps pic.twitter.com/LFTEzNck Daniel Kleine-Albers ?posted on the #thecattlecrew blog: Assigning more memory to JDeveloper http://thecattlecrew.wordpress.com/2012/07/10/jdeveloper-quicktip-assigning-more-memory/ … OTNArchBeat ?BAM design pointers | Kavitha Srinivasan http://pub.vitrue.com/TOhP SOA Community ?Did you receive the July SOA community newsletter? read it! Want to become a member http://www.oracle.com/goto/emea/soa #soacommunity #soa #opn OracleBlogs ?Markus Zirn, Big Data with CEP and SOA @ SOA, Cloud &amp; Service Technology Symposium 2012 http://ow.ly/1lcSkb Andrejus Baranovskis Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Eric Elzinga ?Service Facade design pattern in OSB, http://bit.ly/NnOExN Eric Elzinga ?New BPEL Thread Pool in SOA 11g for Non-Blocking Invoke Activities from 11.1.1.6 (PS5), http://bit.ly/NnOc2G Gilberto Holms New Post: Siebel Connection Pool in Oracle Service Bus 11g http://wp.me/pRE8V-2z Oracle UPK & Tutor ?UPK Pre-Built Content Update: UPK pre-built content development efforts are always underway and growing. Ove... http://bit.ly/R2HeTj JDeveloper & ADF ?Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove... http://ow.ly/1kYqES SOA Community BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 … #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1 OTNArchBeat ?A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ OTNArchBeat ?BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Cannot update grub with parameters on live USB

    - by Nanne
    I have booted from a live USB ("Try Ubuntu") that also has a persistent option set (I used LiLi to create it), to do some tests for this pcie hotplug issue I'm having. I'm trying to test some boot parameters (like in this question) by doing this:
    sudo nano /etc/default/grub
    sudo update-grub
    The problem is that the last command gives me this:
    /usr/sbin/grub-probe: error: failed to get canonical path of /cow
    It looks like /cow is the file system that is mounted on /, according to:
    :~# df
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /cow             4056896 2840204   1007284  74% /
    udev             1525912       4   1525908   1% /dev
    tmpfs             613768     844    612924   1% /run
    ....
    Is there a way for me to run update-grub?

    Read the article

  • Newbie error with Windows Azure–403 not authorised

    - by simonsabin
    Azure uses certificates to manage secure access to the management side of things. I’ve started looking into Azure for some SQLBits, so I needed to configure certificates to connect. VS will nicely create a cert for you and copy a path for you to use to upload to Azure. I went to the Azure portal ( https://windows.azure.com/Default.aspx ), found the certificate section and added the certificate. However, after doing this I couldn’t connect. After trying numerous fruitless things, including...(read more)

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    .NET is now more than eight years old, and some of its APIs have aged with more grace than others. System.IO in particular has always been a little awkward. It's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo). In these APIs, paths are plain strings. Since .NET v1, lots of good things have happened to C#: lambda expressions, extension methods, and optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For...
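    To illustrate the kind of fluent wrapper the article is about, here is a minimal C# sketch of the general pattern; it is not FluentPath's actual API, just an assumed shape built on plain System.IO:

        using System.IO;

        // A tiny fluent wrapper over System.IO; each call returns 'this'
        // so operations chain instead of going through static Directory/File calls.
        public class FluentDir
        {
            private readonly string _path;
            public FluentDir(string path) { _path = path; }

            public FluentDir CreateIfMissing()
            {
                if (!Directory.Exists(_path)) Directory.CreateDirectory(_path);
                return this;
            }

            public FluentDir WriteFile(string name, string contents)
            {
                File.WriteAllText(Path.Combine(_path, name), contents);
                return this;
            }
        }

        // Usage:
        // new FluentDir(@"C:\temp\demo").CreateIfMissing().WriteFile("a.txt", "hello");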

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in sample statistics for XFS, Ext4+JBD2, and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, to reduce the CPU overhead. User namespace support. Both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debug option which can be deployed in production environments. Readahead log object recovery; this change can speed up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued in this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable - XFS has a good record of stability over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or so. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files - "XFS does not work very well for small files" has been an old story for years now. Best choice (maybe) for distributed PB-scale filesystems - e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) - Ext4 does not support a single file larger than around 15.95TB. Scalability - any objection that XFS is best on this point? :) XFS deals with transaction concurrency better than Ext4 - why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4. Misc - Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang) This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two communications - fetch the metadata from the MDS and then get the data from the OSD - and small file access is also limited by the network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to the OSD. For this feature, they have only run some small scale testing, but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even widely deployed across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small scale storage (32 nodes), although they'd like to try 6000 nodes in the future. Improve Linux swap for Flash storage (led by Shaohua Li) Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently use SSD for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure to track swap disk usage, but it uses a slow linear scan. This becomes a bottleneck when finding many adjacent pages in the use of SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD. IO pattern: In most cases, the swap IO is in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily. TLB flush: If we're reclaiming one active page, we should first move the page from the active lru list to the inactive lru list, and then reclaim the page from the inactive lru to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: A page fault does iodepth=1 sync IO, but it's a bit wasteful to issue only a page-sized IO. The obvious solution is to do swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (though I think some improvement can also be done there). SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to a cluster. Misc: lock contention. For many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current memory compression status. There are now 3 projects (zswap/zram/zcache) which all aim to smooth swap IO storms and improve performance, but they each have their own pros and cons. ZSWAP: It is implemented on the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is lru-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages. ZRAM: Acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are some discussions about merging ZRAM into ZSWAP, or vice versa, but no agreement yet. ZCACHE: Handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first is RAID 5/6 cards that use full stripe writes to improve performance. The second one he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression which is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major item they are working on is support for ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools: Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things - simple mistakes, complex problems - but the challenges are: some are very slow, there are false positives, and it may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers. Fuzzers/test suites, e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases still has too much overhead to run at all times during debugging. Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel) - give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces in the future: the first is audit, though it is not clear yet whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it? In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already mentioned above. Misc There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • How to create a new Team Project Collection in TFS2010

    - by jehan
    TFS 2010 has introduced the notion of the Team Project Collection (TPC). I have already discussed TPCs in an earlier post; you can check it out here. In this post, I will demonstrate how to create a new Team Project Collection in TFS2010. First, you have to open the TFS Administration Console (Start -> All Programs -> Microsoft Team Foundation Server 2010 -> Team Foundation Server Administration Console), expand the Application Tier node in the TFS Administration Console and click on Team Project Collections. Here you will see the TPCs that already exist; I have only one TPC, named New Collection, and I'm going to create a new TPC called Demo Collection. To create a new Team Project Collection, you need to click on Create Collection; it will open the Create New Team Project Collection window. Under the Name tab, you have to enter the name you want to give your new TPC (I am naming it Demo Collection). You can also provide a description of your TPC in the Description tab, which is optional, and click Next. Here, you need to enter the name of the SQL Server instance where you want your new TPC data to reside. You have the option either to create a database for this TPC or to use an already existing empty database; then click Next. On the next screen, you have to choose the SharePoint configuration. Here you have the options to configure the SharePoint site for the TPC at the default location, to specify your existing SharePoint site, or to not configure SharePoint for this collection at all; if you choose the last option, then you cannot configure SharePoint sites for any of the Team Projects under this Project Collection. You also have the flexibility to create a SharePoint site for this TPC later on; in that case, you have to configure the SharePoint sites for the existing team projects manually. On the next screen, you will have the Reports configuration. Here you have the options to configure the Reports for the TPC at the default path, to specify the path to an existing Reports folder, or to not configure Reports for this collection at all; if you choose the last option, then you cannot create Reports for any of the Team Projects under this Project Collection. Here also, you can enable reporting for this TPC later on. The next screen is related to the Lab Management configuration. Lab Management is a new feature in TFS2010 which enables users to create and manage virtual test environments where you can deploy and test your application. There are no options available here, as I don't have Lab Management configured for my Team Foundation Server. The next screen is the Review Configuration window, which shows all the configuration settings you have specified, so that you can review the configurations before creating the Team Project Collection. If you want to make any changes to the configurations, you can go back to the previous windows and make the changes. After reviewing the configuration settings, you can click on the Verify button, which will verify whether your Team Project Collection is ready to be created; it will show the errors and warnings (if any) that could make your Team Project Collection creation fail. You can then choose to create the Team Project Collection if the verify step doesn't throw any warnings or errors.
    If the verify step throws any errors, it is strongly suggested that you first rectify the issues and only then go for TPC creation - especially in the case of warnings, as it is common practice to overlook warnings. If you choose the create TPC option, it will start the process of creating a Team Project Collection, and once it is completed you can check the configuration status of the different components. You can see in the screen below that all the components were configured successfully. On the next screen, you can find the location of the log file created for this Team Project Collection creation; this log file is really important in case of a creation failure, because it will help you find the root cause of the failure. Now you can see that the new Team Project Collection (Demo Collection) is available in the Team Project Collections tab and its status is Online. You can now try to connect to this Team Project Collection from Team Explorer. Choose the newly created Team Project Collection and click on Connect. This Team Project Collection is empty because no Team Projects have been created yet. Now you can create new Team Projects and start working.

    Read the article

  • How to Add a Business Card, or vCard (.vcf) File, to a Signature in Outlook 2013 Without Displaying an Image

    - by Lori Kaufman
    Whenever you add a Business Card to your signature in Outlook 2013, the Signature Editor automatically generates a picture of it and includes that in the signature as well as attaching the .vcf file. However, there is a way to leave out the image. To remove the business card image from your signature but maintain the attached .vcf file, you must make a change to the registry. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong. Before changing the registry, we must add the Business Card to the signature and save it so a .vcf file of the contact is created in the Signatures folder. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. For this example, we will create a new signature to include the .vcf file for your business card without the image. Click New below the Select signature to edit box. Enter a name for the new signature, such as Business Card, and click OK. Enter text in the signature editor and format it the way you want or insert a different image or logo. Click Business Card above the signature editor. Select the contact you want to include in the signature on the Insert Business Card dialog box and click OK. Click Save below the Select signature to edit box. This creates a .vcf file for the business card in the Signatures folder. Click on the business card image in the signature and delete it. You should only see your formatted text or other image or logo in the signature editor. Click OK to save your new signature and close the signature editor. Close Outlook as well. Now, we will open the Registry Editor to add a key and value to indicate where to find the .vcf to include in the signature we just created. If you’re running Windows 8, press the Windows Key + X to open the command menu and select Run. You can also press the Windows Key + R to directly access the Run dialog box. NOTE: In Windows 7, select Run from the Start menu. In the Open edit box on the Run dialog box, enter “regedit” (without the quotes) and click OK. If the User Account Control dialog box displays, click Yes to continue. NOTE: You may not see this dialog box, depending on your User Account Control settings. Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures Make sure the Signatures key is selected. Select New | String Value from the Edit menu. NOTE: You can also right-click in the empty space in the right pane and select New | String Value from the popup menu. Rename the new value to the name of the Signature you created. For this example, we named the value Business Card. Double-click on the new value. In the Value data edit box on the Edit String dialog box, enter the value indicating the location of the .vcf file to include in the signature. The format is: <signature name>_files\<name of .vcf file> For our example, the Value data should be as follows: Business Card_files\Lori Kaufman The name of the .vcf file is generally the contact name. If you’re not sure of what to enter for the Value data for the new key value, you can check the location and name of the .vcf file. To do this, open the Outlook Options dialog box and access the Mail screen as instructed earlier in this article. 
However, press and hold the Ctrl key while clicking the Signatures button. The Signatures folder opens in Windows Explorer. There should be a folder in the Signatures folder named after the signature you created with “_files” added to the end. For our example, the folder is named Business Card_files. Open this folder. In this folder, you should see a .vcf file with the name of your contact as the name of the file. For our contact, the file is named Lori Kaufman.vcf. The path to the .vcf file should be the name of the folder for the signature (Business Card_files), followed by a “\”, and the name of the .vcf file without the extension (Lori Kaufman). Putting these names together, you get the path that should be entered as the Value data in the new key you created in the Registry Editor. Business Card_files\Lori Kaufman Once you’ve entered the Value data for the new key, select Exit from the File menu to close the Registry Editor. Open Outlook and click New Email on the Home tab. Click Signature in the Include section of the New Mail Message tab and select your new signature from the drop-down menu. NOTE: If you made the new signature the default signature, it will be automatically inserted into the new mail message. The .vcf file is attached to the email message, but the business card image is not included. All you will see in the body of the email message is the text or other image you included in the signature. You can also choose to include an image of your business card in a signature with no .vcf file attached.     
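    If you prefer to script this registry change rather than clicking through the Registry Editor, the same edit can be done in a few lines of C# (a sketch only; the value name and data are the ones from this example):

        using Microsoft.Win32;

        class AddSignatureVcfValue
        {
            static void Main()
            {
                // Same edit as the Registry Editor steps above: a string value named
                // after the signature, whose data points at the .vcf file inside the
                // signature's "_files" folder.
                using (var key = Registry.CurrentUser.CreateSubKey(
                    @"Software\Microsoft\Office\15.0\Outlook\Signatures"))
                {
                    key.SetValue("Business Card", @"Business Card_files\Lori Kaufman",
                        RegistryValueKind.String);
                }
            }
        }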

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me -  installing Visual Studio before explicitly installing .NET 3.5 from Windows Features - in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Feature dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and install a non-SP1 one version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy because the standalone installer doesn't work on Windows 8. I can't get either .NET 3.5 installer or the SP 1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
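    As an aside, the SP-level check described above is easy to script. Here is a minimal C# sketch (my own helper, not from the original post) that reads the SP value from both the 64-bit and 32-bit registry views, which is exactly the discrepancy I ran into:

        using System;
        using Microsoft.Win32;

        class NetFx35SpCheck
        {
            static void Main()
            {
                // Read the .NET 3.5 "SP" value from both registry views; in my case
                // the 64-bit view reported 1 while the 32-bit view reported 0.
                foreach (var view in new[] { RegistryView.Registry64, RegistryView.Registry32 })
                {
                    using (var baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, view))
                    using (var key = baseKey.OpenSubKey(@"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"))
                    {
                        Console.WriteLine("{0}: SP = {1}", view,
                            key == null ? (object)"not installed" : key.GetValue("SP"));
                    }
                }
            }
        }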
    How did this happen
    I'm not sure exactly whether this is the cause and effect, but I suspect the story goes like this: Installed Windows 8 without support for .NET 3.5. Installed Visual Studio 2010, which installs .NET 3.5 (no SP). I now had .NET 3.5 installed, but without SP1. I then: Tried to install CodeRush - Error: .NET 3.5 SP1 required. Enabled .NET 3.5 in Windows Features. I figured enabling the .NET 3.5 Windows Feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left at least the 32 bit install with no SP1.
    How to Fix it
    My final solution was to completely uninstall .NET 3.5 *and* to reboot: Go to Windows Features. Uncheck the .NET Framework 3.5. Restart Windows. Go to Windows Features. Check .NET Framework 3.5. And voila, I now have a proper installation of .NET 3.5. I tried this before, but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5!
    More Problems
    The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows:
    dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs
    You can try this without the /Source flag first - which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong.
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I original asked this question on MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this is trying to find attacking IP address then add it into Firewall block rule. So I suppose: 1, You are running a Windows Server 2008 facing the Internet. 2, You need to have some port open for service, e.g. TCP 21 for FTP; TCP 3389 for Remote Desktop. You can see in my code I’m only dealing with these two since that’s what I opened. You can add further port number if you like, but the way to process might be different with these two. 3, I strongly suggest you use STRONG password and follow all security best practices, this ps1 code is NOT for adding security to your server, but reduce the nuisance from brute force attack, and make sys admin’s life easier: i.e. your FTP log won’t hold megabytes of nonsense, your Windows system log will not roll back and only can tell you what happened last month. 4, You are comfortable with setting up Windows Firewall rules, in my code, my rule has a name of “MY BLACKLIST”, you need to setup a similar one, and set it to BLOCK everything. 5, My rule is dangerous because it has the risk to block myself out as well. I do have a backup plan i.e. the DELL DRAC5 so that if that happens, I still can remote console to my server and reset the firewall. 6, By no means the code is perfect, the coding style, the use of PowerShell skills, the hard coded part, all can be improved, it’s just that it’s good enough for me already. It has been running on my server for more than 7 MONTHS. 7, Current code still has problem, I didn’t solve it yet, further on this point after the code. :)    #Dong Xie, March 2012  #my simple code to monitor attack and deal with it  #Windows Server 2008 Logon Type  #8: NetworkCleartext, i.e. FTP  #10: RemoteInteractive, i.e. RDP    $tick = 0;  "Start to run at: " + (get-date);    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";  $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";    while($True) {   $blacklist = @();     "Running... (tick:" + $tick + ")"; $tick+=1;    #Port 3389  $a = @()  netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `    $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }  if ($a.count -gt 0) {    $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `      $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique    foreach ($ip in $a) { if ($ips -contains $ip) {      if (-not ($blacklist -contains $ip)) {        $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;        "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;        if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}      }      }    }  }      #FTP  $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.     #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.  
$count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;  if ($count -gt 50) #threshold  {     $ips = @();     $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;       $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;     $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;         foreach ($ip in $ips) {       if (-not ($blacklist -contains $ip)) {        "Found attacking IP on FTP: " + $ip;        $blacklist = $blacklist + $ip;       }     }  }        #Firewall change <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255",""); #inside $current there is no \r or \n need remove. foreach ($ip in $blacklist) { if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {"Adding this IP into firewall blocklist: " + $ip; $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current; Invoke-Expression $c; } } #>    foreach ($ip in $blacklist) {    $fw=New-object –comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx    $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?    if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))      {"Adding this IP into firewall blocklist: " + $ip;         $myrule.RemoteAddresses+=(","+$ip);      }  }    Wait-Event -Timeout 30 #pause 30 secs    } # end of top while loop.   Further points: 1, I suppose the server is listening on port 3389 on server IP: 192.168.100.101 and 192.168.100.102, you need to replace that with your real IP. 2, I suppose you are Remote Desktop to this server from a workstation with IP: 10.0.0.1. Please replace as well. 3, The threshold for 3389 attack is 20, you don’t want to block yourself just because you typed your password wrong 3 times, you can change this threshold by your own reasoning. 4, FTP is checking the log for attack only to the last 5 mins, you can change that as well. 5, I suppose the server is serving FTP on both IP address and their LOG path are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly. 6, FTP checking code is only asking for the last 200 lines of log, and the threshold is 10, change as you wish. 7, the code runs in a loop, you can set the loop time at the last line. 
To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running; you can Ctrl-C to break it. While it is running you will see the tick messages; when it detects an attack you will see it report the IP it is adding to the firewall rule.

Regarding the design of the code:

1. There are many ways you can detect an attack, but adding an IP into a block rule is no small thing; you need to think hard before doing it. Reasons include: you don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.
2. Thus for each service/port, I double check. For 3389, the IP first needs to show in netstat.exe, then in the Event log; for FTP, I first check the Event log, then the FTP log files.
3. In three places I need to make sure I'm not adding myself into the block rule: -ne for a single IP, -like for the subnet.

Now the final bit:

1. The code will stop working after a while (depending on how heavily you are attacked: it could be weeks, months, or days?!). It will throw a red error message in CMD. Don't panic: it does no harm, but it is also no longer blocking new attacks. THE REASON (not confirmed with MS people): the COM object that manages the firewall can only be given a list of IP addresses up to a length of around 32KB, I think; once it reaches the limit, you get the error message.
2. This is in fact my second solution, using the COM object; the first solution, which uses netsh, is still in the comment block for your reference. That one fails because, being run from CMD, you can only throw it a list of IPs up to 8KB.
3. I haven't worked out the workaround yet. Some ideas include: wrap that RemoteAddresses setting line with error checking and, once it reaches the limit, use the newly detected IPs as the list instead of appending to them (see the sketch below). This basically resets your block rule to ground zero and loses the previous bad IPs. This is not as harmful as it sounds, because if, after a certain period has passed, any of these bad IPs still hasn't repented and continues to attack you, it only gets 30 seconds or 20 guesses at your password before you block it again. And there is the benefit that the bad IP may return to good hands, and then you are not blocking a potential customer or your CEO's home PC just because once upon a time it was a zombie. Thus the Zen of blocking: never block any IP for too long.
4. But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary-length list of IP addresses in a rule; at least from my experience on the Forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full; then you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, ... etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
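For what it's worth, here is a quick, untested sketch of the reset-on-overflow idea from point 3 above. It assumes the same $fw, $myrule and $blacklist objects as the code above; the only change is that the append is wrapped in error checking, and on failure the rule is reset to hold just the newly detected IP:

foreach ($ip in $blacklist) {
  $fw = New-Object -comObject HNetCfg.FwPolicy2;
  $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1;
  if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
    "Adding this IP into firewall blocklist: " + $ip;
    try {
      $myrule.RemoteAddresses += ("," + $ip);
    } catch {
      #The list is presumably full (the ~32KB limit); start from ground zero
      #with only the newly detected IP, losing the old entries.
      "RemoteAddresses appears full, resetting rule to: " + $ip;
      $myrule.RemoteAddresses = $ip;
    }
  }
}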

    Read the article

  • Installing SharePoint 2013 on Windows 2012- standalone installation

    - by sreejukg
In this article, I am going to share my experience installing SharePoint 2013 on Windows 2012. This was the first time I tried SharePoint 2013, so I thought sharing it would benefit somebody who would like to install SharePoint 2013 as a standalone installation. A standalone installation is meant for evaluation/development purposes; for production environments, you need to follow the best practices and create the required service accounts. Microsoft has published the deployment guide for SharePoint 2013, which you can download from the link below. http://www.microsoft.com/en-us/download/details.aspx?id=30384 Since this is for a development environment, I am not going to create any service accounts; I logged in to Windows 2012 as an administrator and just placed my installation DVD in the drive. When I ran the setup from the DVD, the below splash screen appeared. This reflects the new UI changes happening across all Microsoft applications; the interface matches the metro style applications (Windows 8 style). As you can see, the options are the same as those of the SharePoint 2010 installation screen. Click on the "install software prerequisites" link to get all the prerequisites installed. You need a valid internet connection to do this. Clicking on the install software prerequisites will bring up the following dialog. Click Next, and you will see the terms and conditions. Select the I accept check box and click Next. The installation will start immediately. If, for any reason, you stop the installation and start it later, the product preparation tool will check whether a particular component is installed, and if yes, the installation of that particular component will be skipped. If you do not have an internet connection, you will face a download error as follows. At any point of failure, the error log will be available for you to review. If all is OK, you will reach the below dialog; this means some components will be installed once the PC is rebooted. Note that clicking on Finish will not ask you for further confirmation, so make sure to save all your work before clicking the Finish button. Once the server is restarted, the product preparation tool will start automatically and you will see the following dialog. Now go to the SharePoint 2013 splash page and click on the "Install SharePoint Server" link. You need to enter the product key here. Enter the product key as you received it and click Continue. Select the check box for the license agreement and click on the Continue button. Now you need to select the installation type. Select Stand-alone and click on the "Install Now" button. A dialog will pop up that updates you with the process and progress. The installation took around 15-20 minutes with 2 GB of RAM installed in the server, which seems fair. Once the installation is over, you will see the following dialog. Make sure you select "Run the products and configuration wizard". If you forget to select the check box, you can find the products and configuration wizard on the Start tiles. The products and configuration wizard will start. If you get any dialog saying some of the services will be stopped, just accept it. Since we selected a standalone installation, it will not ask for any user input, as it already knows the database to be configured. Once the configuration is over without any problems, you will see the configuration successful message. You can also find the link to central administration on the Start screen.

Troubleshooting

During my first setup process, I got the below error.
System.ArgumentException: The SDDL string contains an invalid sid or a sid that cannot be translated.
Parameter name: sddlForm
  at System.Security.AccessControl.RawSecurityDescriptor.BinaryFormFromSddlForm(String sddlForm)
  at System.Security.AccessControl.RawSecurityDescriptor..ctor(String sddlForm)
  at Microsoft.SharePoint.Win32.SPNetApi32.CreateShareSecurityDescriptor(String[] readNames, String[] changeNames, String[] fullControlNames, String& sddl)
  at Microsoft.SharePoint.Win32.SPNetApi32.CreateFileShare(String name, String description, String path)
  at Microsoft.SharePoint.Administration.SPServer.CreateFileShare(String name, String description, String path)
  at Microsoft.Office.Server.Search.Administration.AnalyticsAdministration.CreateAnalyticsUNCShare(String dirParentLocation, String shareName)
  at Microsoft.Office.Server.Search.Administration.AnalyticsAdministration.ProvisionAnalyticsShare(SearchServiceApplication serviceApplication)
  …

The configuration wizard displayed the error as below. The error occurred in step 8 of the configuration wizard, and by that time central administration had already been provisioned. So from the start I was able to open the central administration website, but the search service application was showing an error. I found a good blog that explains the reason for the error. http://kbdump.com/sharepoint2013-standalone-config-error-create-sample-data/ The workaround specified in the blog works fine. I think SharePoint must be provisioning Search using the Network Service account, so instead of giving permission to everyone, you could try giving permission to the Network Service account (I didn't try this yet, but you could try and post your feedback here; a sketch of the idea appears at the end of this article). In a production environment you will have specific accounts that have access rights as recommended by Microsoft guidelines. Installation of SharePoint 2013 is pretty straightforward. Hope you enjoyed the article!
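Purely as an illustration of that untested suggestion, here is a minimal sketch that grants the Network Service account rights on the folder the wizard failed to share, instead of granting Everyone. The $analyticsDir path is my assumption, not from the article; check the configuration wizard log for the real folder that CreateAnalyticsUNCShare was pointing at.

#Hypothetical path - replace with the Analytics folder from your wizard log.
$analyticsDir = "C:\Program Files\Microsoft Office Servers\15.0\Data\Office Server\Analytics"

#Grant Network Service full control, inherited by subfolders and files,
#then re-run the SharePoint products and configuration wizard.
icacls $analyticsDir /grant "NETWORK SERVICE:(OI)(CI)F"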

    Read the article

  • update-manager crashes Ubuntu 12.04

    - by user205450
The update-manager crashes with the following error:

frank@darkstar2:~$ update-manager
Traceback (most recent call last):
  File "/usr/bin/update-manager", line 33, in <module>
    from UpdateManager.UpdateManager import UpdateManager
  File "/usr/lib/python2.7/dist-packages/UpdateManager/UpdateManager.py", line 72, in <module>
    from Core.MyCache import MyCache
  File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/MyCache.py", line 34, in <module>
    import DistUpgrade.DistUpgradeCache
  File "/usr/lib/python2.7/dist-packages/DistUpgrade/DistUpgradeCache.py", line 60, in <module>
    KERNEL_INITRD_SIZE = _set_kernel_initrd_size()
  File "/usr/lib/python2.7/dist-packages/DistUpgrade/DistUpgradeCache.py", line 53, in _set_kernel_initrd_size
    size = estimate_kernel_size_in_boot()
  File "/usr/lib/python2.7/dist-packages/DistUpgrade/utils.py", line 74, in estimate_kernel_size_in_boot
    size += os.path.getsize(f)
  File "/usr/lib/python2.7/genericpath.py", line 49, in getsize
    return os.stat(filename).st_size
OSError: [Errno 5] Input/output error: '/boot/abi-3.2.0-54-generic'

I am not sure how to read the error, but it seems to fail while getting a file's size. How do I fix it?

    Read the article

  • “My life at Oracle”

    - by cristian.condurache(at)oracle.com
Hello everybody! My name is Eva and I currently work in Oracle Italy as Sales Programs Manager for the Technology Sales organization. Since 2009, I have also proudly represented the Oracle Education Foundation within my country as the Ambassador for Italy. My career path in this amazing company began 5 years ago as a fresh graduate: after several years studying abroad, mainly in Germany and Ireland, I was looking for a valuable and concrete opportunity which could fulfill my energetic spirit. I wanted to develop myself inside a stimulating and "fast" business environment... and here came Oracle, and I really couldn't ask for anything better!

THE PARTNER EXPERIENCE

The first department I had the chance to work in was the Alliances and Channels organization, where I had the opportunity to join a brilliant team of great and visionary guys. I began with the responsibility to analyze and rationalize the portfolio of Oracle business partners and to identify potential cross-area solutions, which had to be highlighted both on the local market and internationally: this ended up with the implementation of the "Partner Community" model, a business environment of selected Oracle partners, specialized in the different technology focus areas. This new concept was then recognized as an EMEA Best Practice and replicated internationally. Having the opportunity to strengthen, day after day, strategic relationships with several business partners and to study the market positioning of their technology solutions, I was given the role to develop the "Oracle Partner Network Innovation Award" in Italy: the EMEA competition encouraging and rewarding proven and successful technology innovations, creating high value for our common customers and generating new business potential. Several Italian partner solutions won different prizes, and I decided that it was worth collecting all those valuable projects, winners and short-listed, inside two specific books in order to also give them international market visibility: OPN Innovation Award Booklet 2007 and OPN Innovation Award Booklet 2008. Inside the Alliances and Channels department I really had the opportunity to do amazing things, like working side by side with one of the most exceptional teams in Oracle I have ever worked with: the EMEA Recruitment Team. Together, in fact, we conceived a brand new business initiative for our partners, called the "Oracle Campus Joint Program". This program was awarded as an EMEA Best Practice and acknowledged by both Italian public institutions and press media. Italy is currently running its 5th edition. Briefly, the "Oracle Campus Joint Program" aims at addressing the growing lack of technology competences and skills on the market. By identifying a specific technology area, developing an intensive 4-6 week Oracle University training course, and collaborating with important academic institutes, international "gurus" and professionals, our business partners are able to benefit from a pool of brilliant, top-talented young consultants and offer them a significant career opportunity.

BUSINESS BUT NOT ONLY: THE NO-PROFIT EXPERIENCE OF ORACLE

Currently my mission in Oracle is to continue driving the implementation of strategic business development and sales programs for the entire Oracle Technology stack, involving both partners and end customers.
But in a role completely distinct from the day-to-day business, I'm also honored to represent in Italy the global charity organization founded by Oracle - the Oracle Education Foundation - and to drive its corporate citizenship and marketing programs. The Oracle Education Foundation is an independent charitable organization funded by Oracle and is dedicated to helping students develop 21st century skills through project learning and the use of technology. It provides "ThinkQuest" as a free program to primary and secondary (K12) schools. Just some significant numbers: today 548,000 students/teachers in 47 countries use ThinkQuest, and the Oracle Education Foundation partners with 40+ no-profit or government organizations globally.

ABOUT MYSELF AND MY INTERESTS

About myself... I'm very enthusiastic and positive, always trying to transform difficult issues into challenging opportunities. My day usually begins very early in the morning with running or swimming, or, when I need to collect some "zen" energies, with a yoga session, or better, a long walk with my dog. I definitely love animals, and generally speaking I'm very keen on environmental issues and try, as much as I can, to carry out a healthy and "planet respectful" lifestyle. My thirst for knowledge pushed me some time ago to begin a new personal challenge: I decided to enroll, dedicating a good part of my free time, in a second university degree. I chose "Neuroeconomics", an innovative academic path which combines psychology, economics, and neuroscience and studies how people make decisions and the role of the brain when people evaluate these decisions, categorizing risks and rewards and generally interacting with each other. I've been very glad to talk about my experience in this article, as working for Oracle is very stimulating. This company gives you the opportunity to face new challenges, work with highly talented people and gain professional recognition, globally as well. Motivation, good results and innovation are always pursued, recognized and fully supported. Thanks, and I wish you all an amazing career! If you have any questions please contact [email protected]. For our job opportunities, please look at http://campus.oracle.com.

Technorati Tags: EMEA,Oracle Partners,Oracle Campus,Oracle Education,experience,EMEA Recruitment Team

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it). So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

Explanation

The pooling parameters, and many others, are handled through Managed Beans (MBeans) that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean combines the configuration deployed as part of the application with any overrides defined as -D environment flags on the JVM startup for the application server instance.

Using WLST to Browse the Bean Values

For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

Step 0: Before You Start

You will need the following:
Access to the console on the machine that is running the server.
The WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different).
The name of the deployed application (in this example FMWdh_application1).
The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used).

Step 1: Start the WLST console

To start at the beginning, you need to run the WLST command, but that needs a little setup:

Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware Home:
[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin

Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
[oracle@mymachine bin] source setWLSEnv.sh

Start the WLST interactive console:
[oracle@mymachine bin] java weblogic.WLST
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline>

Step 2: Enter the WLST commands

Connect to the server:
wls:> connect('weblogic','password')

Change to the Custom root; this is where the AM pooling MBeans are registered:
wls:> custom()

Change to the bc4j MBean directory:
wls:> cd ('oracle.bc4j.mbean.config')

Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: one for the config entry itself, and then one each for Security, DB Connection and AM Pooling.
So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on. The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using and the xcfg name:

First of all, narrow your list to just those directories returned from the ls() command that begin with oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
Now look for the correct application name, e.g. Application=FMWdh_application1.
The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg.
Finally look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed.

Now you have identified the correct directory name, change to it (keep the name on one line of course - I've had to split it across lines here for clarity):

wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
    type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
    Application=FMWdh_application1,
    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')

Now you can actually view the parameter values with a simple ls() command:

wls:> ls()

And here's the output, in which you can view the realtime values of the various pool settings:

-rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
-rw- AmpoolDoampooling true
-rw- AmpoolDynamicjdbccredentials false
-rw- AmpoolInitpoolsize 2
-rw- AmpoolIsuseexclusive true
-rw- AmpoolMaxavailablesize 40
-rw- AmpoolMaxinactiveage 600000
-rw- AmpoolMaxpoolsize 4096
-rw- AmpoolMinavailablesize 2
-rw- AmpoolMonitorsleepinterval 600000
-rw- AmpoolResetnontransactionalstate true
-rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
-rw- AmpoolTimetolive 3600000
-rw- AmpoolWritecookietoclient false
-r-- ConfigMBean true
-rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
-rw- Doconnectionpooling false
-rw- Dofailover false
-rw- Initpoolsize 0
-rw- Maxpoolcookieage -1
-rw- Maxpoolsize 4096
-rw- Poolmaxavailablesize 25
-rw- Poolmaxinactiveage 600000
-rw- Poolminavailablesize 5
-rw- Poolmonitorsleepinterval 600000
-rw- Poolrequesttimeout 30000
-rw- Pooltimetolive -1
-r-- ReadOnly false
-rw- Recyclethreshold 10
-r-- RestartNeeded false
-r-- SystemMBean false
-r-- eventProvider true
-r-- eventTypes java.lang.String[jmx.attribute.change]
-r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
-rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl

Thanks to Brian Fry on the JDeveloper PM Team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.

    Read the article

  • "cannot open file userpref.blend@ for writing: Permission denied" in blender

    - by ganezdragon
I'm using Blender 2.69, installed via the Software Centre, and when I save my user preferences through File - User Preferences and click on "Save User Settings", there is a message "cannot open file /home/ganez/.config/blender/2.69/config/userpref.blend@ for writing: Permission denied". I have checked the path /home/ganez/.config/blender/2.69/config/ and there is no userpref.blend file present. PS: I think this has something to do with the file permissions of that config folder, and I have no idea how to use the chmod command. So any advice? Thank you in advance.

    Read the article

  • "Virtual Machine Manager" and "Virtual Machine Server" setup manual

    - by urtihu
Is there a manual available that covers the proper setup of a "Virtual Machine Server" with no GUI, managed from an Ubuntu workstation with a GUI and "Virtual Machine Manager" installed? Both are version 12.04. I get the following error message:

unable to connect to libvirt
Verify that
- The libvirt-bin package is installed
- The libvirt daemon has been started
- You are a member of the libvirtd group

The package is installed; for some reason starting the daemon seems to crash:

libvirtd start
info: libvirt version 0.9.8
error: virExecWithHook:328 : cannot find 'pm-is-supported' in path: No such file or directory

also:

qemucapsInit:856: Failed to get host power management capabilities

So I guess I did not set the server up correctly. All the manuals I found do not mention "Virtual Machine Manager". I only chose the packages to connect with SSH remotely and the "Virtual Machine Server" for the server installation. So I would like to find a manual that covers this combo; the ones I found only covered GUI machines that have both on the same machine, which will not really help with system performance as a hypervisor.

    Read the article

  • Installing gtk-config and/or fsv in Ubuntu 10.10

    - by Wayne Werner
Hi, I'm trying to install the File System Visualizer (think "It's a UNIX system! I know this!" from Jurassic Park) on Ubuntu 10.10. I've got the .tar.gz downloaded and extracted. However, when I run ./configure, I get this output:

loading cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
checking for working makeinfo... missing
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking how to run the C preprocessor... gcc -E
checking for ranlib... ranlib
checking for POSIXized ISC... no
checking for dirent.h that defines DIR... yes
checking for opendir in -ldir... no
checking for ANSI C header files... yes
checking whether time.h and sys/time.h may both be included... yes
checking for strings.h... yes
checking for sys/time.h... yes
checking for unistd.h... yes
checking for working const... yes
checking for mode_t... yes
checking for uid_t in sys/types.h... yes
checking for pid_t... yes
checking for size_t... yes
checking for comparison_fn_t... yes
checking for st_blocks in struct stat... yes
checking whether struct tm is in sys/time.h or time.h... time.h
checking for working alloca.h... yes
checking for alloca... yes
checking for working fnmatch... yes
checking for strftime... yes
checking for getcwd... yes
checking for gettimeofday... yes
checking for mktime... yes
checking for strcspn... yes
checking for strdup... yes
checking for strspn... yes
checking for strtod... yes
checking for strtoul... yes
checking for scandir... yes
checking for inline... inline
checking for off_t... yes
checking for unistd.h... (cached) yes
checking for getpagesize... yes
checking for working mmap... yes
checking for argz.h... yes
checking for limits.h... yes
checking for locale.h... yes
checking for nl_types.h... yes
checking for malloc.h... yes
checking for string.h... yes
checking for unistd.h... (cached) yes
checking for sys/param.h... yes
checking for getcwd... (cached) yes
checking for munmap... yes
checking for putenv... yes
checking for setenv... yes
checking for setlocale... yes
checking for strchr... yes
checking for strcasecmp... yes
checking for strdup... (cached) yes
checking for __argz_count... yes
checking for __argz_stringify... yes
checking for __argz_next... yes
checking for stpcpy... yes
checking for LC_MESSAGES... yes
checking whether NLS is requested... yes
checking whether included gettext is requested... no
checking for libintl.h... yes
checking for gettext in libc... yes
checking for msgfmt... /usr/bin/msgfmt
checking for dcgettext... yes
checking for gmsgfmt... /usr/bin/msgfmt
checking for xgettext... /usr/bin/xgettext
checking for gtk-config... no
checking for GTK - version >= 1.2.1... no
*** The gtk-config script installed by GTK could not be found
*** If GTK was installed in PREFIX, make sure PREFIX/bin is in
*** your path, or set the GTK_CONFIG environment variable to the
*** full path to gtk-config.
configure: error: Cannot find proper GTK+ version

Obviously it's looking for gtk-config. However, apparently it doesn't exist in the repos anymore. Then this post mentioned that gtkglarea solved their problem, as mentioned in this file.
Of course that poster neatly forgets to mention exactly what and how gtkglarea solved their problem, and Google is mostly devoid of information on the problem. So I come here asking for help! I would like to install fsv, but it tells me gtk-config doesn't exist. How can I fix this problem in Ubuntu 10.10? Thanks!

    Read the article

  • Blender has weird fuzzy misbehaving icon in launcher

    - by TreefrogInc
    I got Blender 2.58 (stable) from blender.org (downloaded the package, extracted it). Everything works fine, but the only thing that bugs me is that the icon in the launcher (when I open it) is really fuzzy. I guess that's no big deal, but when I select "Keep in Launcher" and close Blender, the icon doesn't work when I try to open it again. Then, I created a Launcher on my desktop with the path to the blender executable, slapped on a shiny new non-fuzzy .svg icon that came with the package, and dragged the (desktop) Launcher to my (Unity) Launcher. Now, when I click on the new launcher, it keeps showing the flashing thing that says it's opening the program, and then the old fuzzy icon comes back but doesn't replace the new icon. Even after Blender opens, the new icon is still flashing. What is wrong with my launcher?

    Read the article

  • WLS MBeans

    - by Jani Rautiainen
WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed in a number of ways: using various UIs, and programmatically using Java or WLST Python scripts. For customization development we can use these features to e.g. manage the deployed customization in MDS, control logging levels, automate deployment of dependent libraries, etc. This article is an introduction to how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation. This article covers a Windows based environment; the steps for Linux would be similar, with some differences, e.g. in how the file paths are defined.

MBeans

The WLS MBeans can be categorized into runtime and configuration MBeans. The runtime MBeans can be used to access runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or of a deployment. The configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file, and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to:

Understanding WebLogic Server MBeans
WLS MBean reference

Java Management Extensions (JMX)

We can use the JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library to the class-path:

WL_HOME\lib\wljmxclient.jar

Connecting to a WLS MBean server

The WLS MBeans are contained in an MBean server; depending on the requirement we can connect to (MBean server / JNDI name):

Domain Runtime MBean Server: weblogic.management.mbeanservers.domainruntime
Runtime MBean Server: weblogic.management.mbeanservers.runtime
Edit MBean Server: weblogic.management.mbeanservers.edit

To connect to the WLS MBean server, first we need to create a map containing the credentials:

Hashtable<String, String> param = new Hashtable<String, String>();
param.put(Context.SECURITY_PRINCIPAL, "weblogic");
param.put(Context.SECURITY_CREDENTIALS, "weblogic1");
param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

These define the user, the password, and the package containing the protocol. Next we create the connection:

JMXServiceURL serviceURL =
    new JMXServiceURL("t3", "127.0.0.1", 7101,
                      "/jndi/weblogic.management.mbeanservers.domainruntime");
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param);
MBeanServerConnection connection = connector.getMBeanServerConnection();

With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX.

Accessing WLS MBeans

The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. The MBean is accessed using the MBeanServerConnection.getAttribute API.
WLS provides entry points to the hierarchy, allowing us to navigate all the WLS MBeans in the hierarchy (MBean server / JMX object name):

Domain Runtime MBean Server: com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
Runtime MBean Servers: com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean
Edit MBean Server: com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean

For example, we can access the Domain Runtime MBean using:

ObjectName service = new ObjectName(
    "com.bea:Name=DomainRuntimeService," +
    "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");

The same syntax works for any "child" WLS MBean, e.g. to find all application deployments we can:

ObjectName domainConfig = (ObjectName)connection.getAttribute(service, "DomainConfiguration");
ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

Alternatively we could access the same MBean using the full syntax:

ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain");
ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

For more details refer to Accessing WebLogic Server MBeans with JMX.

Invoking operations on WLS MBeans

The WLS MBean operations can be invoked with the MBeanServerConnection.invoke API; in the following example we query the state of the "AppsLoggerService" application:

ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime");
Object[] parameters = { "AppsLoggerService", "DefaultServer" };
String[] signature = { "java.lang.String", "java.lang.String" };
String result = (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState", parameters, signature);

The result returned should be "STATE_ACTIVE", assuming the "AppsLoggerService" application is up and running.

WebLogic Scripting Tool (WLST)

The WebLogic Scripting Tool (WLST) is a command-line scripting environment through which we can access the same WLS MBeans. The tool is located under:

$MW_HOME\oracle_common\common\bin\wlst.bat

Do note that there are several instances of the wlst script under $MW_HOME; each of them works, however the commands available vary, so we want to use the one under "oracle_common". The tool starts in offline mode. In offline mode we can access and manipulate the domain configuration; in online mode we can access the runtime information. We connect to the Administration Server:

connect('weblogic','weblogic1', 't3://127.0.0.1:7101')

In both online and offline modes we can navigate the WLS MBeans using commands like "ls" to print content and "cd" to navigate between objects. All the commands available can be obtained with:

help('all')

For details of the tool refer to WebLogic Scripting Tool, and for the commands available refer to the WLST Command and Variable Reference. Also note that the WLST tool can be invoked from Java code in Embedded Mode.

Running Scripts

The WLST tool allows us to automate tasks using Python scripts in Script Mode. A script can be created manually or recorded by the WLST tool.
Example commands for recording a script:

startRecording("c:/temp/recording.py")
<commands that we want to record>
stopRecording()

We can run the script from WLST:

execfile("c:/temp/recording.py")

We can also run the script from the command line:

C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py

Various sample scripts are provided with the WLS instance.

UI to Access the WLS MBeans

There are various UIs through which we can access the WLS MBeans:

Oracle Enterprise Manager Fusion Middleware Control
Oracle WebLogic Server Administration Console
Fusion Middleware Control MBean Browser

In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation; one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above, the JConsole included in the JDK can be used to access the WLS MBeans. JConsole needs to be started with specific parameters to force the WLS objects to be used, and with the required jar files in the classpath:

"C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

For more details refer to Accessing Custom MBeans from JConsole.

Summary

In this article we have covered various ways to access and use the WLS MBeans in the context of the integrated WLS in JDeveloper, to be used for Fusion Applications customization development.

References

Developing Custom Management Utilities With JMX for Oracle WebLogic Server
Accessing WebLogic Server MBeans with JMX
WebLogic Server MBean Reference
WebLogic Scripting Tool
WLST Command and Variable Reference

Appendix A

package oracle.apps.test;

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

/**
 * This class contains simple examples of how to access WLS MBeans using JMX.
 */
public class BlogExample {

    /**
     * Connection to the WLS MBeans
     */
    private MBeanServerConnection connection;

    /**
     * Constructor that takes in the connection information for the
     * domain and obtains the resources from WLS MBeans using JMX.
     * @param hostName host name to connect to for the WLS server
     * @param port port to connect to for the WLS server
     * @param userName user name to connect to for the WLS server
     * @param password password to connect to for the WLS server
     */
    public BlogExample(String hostName, String port, String userName,
                       String password) {
        super();
        try {
            initConnection(hostName, port, userName, password);
        } catch (Exception e) {
            throw new RuntimeException("Unable to connect to the domain " +
                                       hostName + ":" + port);
        }
    }

    /**
     * Default constructor.
     * Tries to create a connection with default values. A runtime exception
     * will be thrown if the default values are not used in the local instance.
     */
    public BlogExample() {
        this("127.0.0.1", "7101", "weblogic", "weblogic1");
    }

    /**
     * Initializes the JMX connection to the WLS MBeans
     * @param hostName host name to connect to for the WLS server
     * @param port port to connect to for the WLS server
     * @param userName user name to connect to for the WLS server
     * @param password password to connect to for the WLS server
     * @throws IOException error connecting to the WLS MBeans
     * @throws MalformedURLException error connecting to the WLS MBeans
     * @throws MalformedObjectNameException error connecting to the WLS MBeans
     */
    private void initConnection(String hostName, String port, String userName,
                                String password)
                                throws IOException, MalformedURLException,
                                       MalformedObjectNameException {
        String protocol = "t3";
        String jndiroot = "/jndi/";
        String mserver = "weblogic.management.mbeanservers.domainruntime";
        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),
                              jndiroot + mserver);
        Hashtable<String, String> h = new Hashtable<String, String>();
        h.put(Context.SECURITY_PRINCIPAL, userName);
        h.put(Context.SECURITY_CREDENTIALS, password);
        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
              "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
        connection = connector.getMBeanServerConnection();
    }

    /**
     * Main method used to invoke the logic for testing
     * @param args arguments passed to the program
     */
    public static void main(String[] args) {
        BlogExample blogExample = new BlogExample();
        blogExample.testEntryPoint();
        blogExample.testDirectAccess();
        blogExample.testInvokeOperation();
    }

    /**
     * Example of using an entry point to navigate the WLS MBean hierarchy.
     */
    public void testEntryPoint() {
        try {
            System.out.println("testEntryPoint");
            ObjectName service =
                new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +
                    "weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
            ObjectName domainConfig =
                (ObjectName)connection.getAttribute(service,
                                                    "DomainConfiguration");
            ObjectName[] appDeployments =
                (ObjectName[])connection.getAttribute(domainConfig,
                                                      "AppDeployments");
            for (ObjectName appDeployment : appDeployments) {
                String resourceIdentifier =
                    (String)connection.getAttribute(appDeployment,
                                                    "SourcePath");
                System.out.println(resourceIdentifier);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of accessing a WLS MBean directly with a full reference.
     * This does the same thing as testEntryPoint in a slightly different way.
     */
    public void testDirectAccess() {
        try {
            System.out.println("testDirectAccess");
            ObjectName appDeployment =
                new ObjectName("com.bea:Location=DefaultDomain," +
                               "Name=AppsLoggerService,Type=AppDeployment");
            String resourceIdentifier =
                (String)connection.getAttribute(appDeployment, "SourcePath");
            System.out.println(resourceIdentifier);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of invoking an operation on a WLS MBean.
     */
    public void testInvokeOperation() {
        try {
            System.out.println("testInvokeOperation");
            ObjectName appRuntimeStateRuntime =
                new ObjectName("com.bea:Name=AppRuntimeStateRuntime," +
                               "Type=AppRuntimeStateRuntime");
            String identifier = "AppsLoggerService";
            String serverName = "DefaultServer";
            Object[] parameters = { identifier, serverName };
            String[] signature = { "java.lang.String", "java.lang.String" };
            String result =
                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",
                                          parameters, signature);
            System.out.println("State of " + identifier + " = " + result);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
We are pleased to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. With the goal of helping partners develop competencies, Oracle University and Oracle Alliances & Channels designed this boot camp, compacting the content and thus reducing costs. Price per participant (3 days): 1,250 EUR + VAT.

Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.

Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.

Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will:
• Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology
• Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
• Expose a broad set of Oracle's ECM technologies

Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content, and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).

Topics Covered
• Introduction to Oracle UCM
  o UCM Overview
  o UCM Architecture Overview
• Content Server and Document Management basics
  o Installation and Administration Skills
    § User and Security Admin
    § Configuration (metadata, DCLs, profiles, rules, etc.)
    § Workflow Admin
    § System Properties and Component Manager
    § Managing Subscriptions
  o Contributing Content
    § Browser form
    § WebDAV folder
    § Desktop Integration
  o Searching
• Web Content Management
  o Site Studio
• Universal Records Management
• Information Rights Management (IRM)
• Image & Process Management (IPM)
• Oracle Document Capture
• Oracle eMail Archive Service

Labs
• Content Server Installation
• Use and Administration of Content Server
• Introduction to Site Studio
• Use and Administration of Records Manager

Demo: The R&D Group and the New Patent
Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management

Case Study
Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually the public Web site.
Use Case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
Agenda:

Day 1
· ECM Overview & Content Server
· ECM Overview
· ECM Architecture and Installation
· UCM and Digital Asset Management DEMO
· Lab 1 - Content Server Installation
· Lab 2 - Use and Administration of Content Server

Day 2
· Web Content Management
· Lab 2 - Use and Administration of Content Server (continued)
· Introduction to Web Content Management
· Lab 3 - Site Studio

Day 3
· URM/IRM/IPM
· Introduction to Universal Records Management
· Lab 4 - URM
· Introduction to Information Rights Management
· Information Rights Management DEMO
· Introduction to Image and Process Management
· Image and Process Management Demo
· Oracle Document Capture
· Oracle eMail Archive

Material needed for the boot camp: This boot camp requires attendees to provide their own laptops for the class. Attendee laptops must meet the following minimum hardware/software requirements:

Hardware
• RAM: 2 GB RAM minimum (1 GB RAM is not enough)
• HDD: 15 GB free HDD space

Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.

---------------------------------------------------------------------

For more information/registration, contact: Mónica Pires, 21 423 51 44.
Schedule and venue: 9:30 - 12:30 and 14:00 - 17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • CodePlex Daily Summary for Friday, April 16, 2010

CodePlex Daily Summary for Friday, April 16, 2010

New Projects
[C#] UML Touch: This project is my master thesis project for MSc Software Engineering at Oxford Brookes University. The goal of this project is to develop a UML Edi...
3D Chart Reports, using ASP.NET 2.0: Project focuses on the visibility of the internal supply chain management process. The objective is materialized using MSChart, and hence provides asse...
Active Directory Health: ActiveDirectoryHealth (ADHealth) Aim: To create a suite of tools to allow an Active Directory administrator to monitor and identify problems, particu...
Art4Desktop: Simple desktop application
Auto Increment field for any entity in MS CRM: Auto Increment field for any entity in MS CRM will add an attribute in any entity which will be of auto increment type.
Convection Game Engine (Basic Edition): A basic 2D game engine written in C# using XNA 3.1
CSharp Intellisense: VS 2010 IntelliSense plug-in. Provides custom IntelliSense for the CSharp Editor that is capable of filtering out events, properties or methods. ...
EP: Generic platform for enterprise applications developed on .NET
File Change Checker: A simple Windows Forms application that checks and copies all modified files given a specified date. It aims to help developers that use Visual Stu...
Folder: Folder is a puzzle platformer game which provides you a whole new experience. You can fold objects in game play; a wall becomes a road, a high sti...
Industrial Dashboard: WCF service that allows executing SQL Server stored procedures straight from javascript code, enabling sending and receiving structured data withou...
kafrilion: Open Source Platform Game
LogikBug.Injection: LogikBug.Injection is an IOC container and is very lightweight. It is very simple to use and yet very robust; there are many options and ways to ...
MongoMvc: NoSql with MongoDB, NoRM and ASP.NET MVC. For more details check out the blog post http://weblogs.asp.net/shijuvarghese/archive/2010/04/16/nosql-wi...
MOSSDAL: MOSS Data Access Layer for data from the Sharepoint Lists Service: MOSSDAL is a lightweight framework for working with SharePoint MOSS List data using the List web service. It can be used with Silverlight or regul...
Should: This is a set of test framework agnostic assertion extensions. This project was born because test runners Should be independent of the...
Software Localization Tool: summary
TIMETABLEASY: planning manager
TR9: implementations of T9 technologies from mobile devices to personal computers and laptops
Trp net tools: net tool
Twep: Twep is a JavaScript lib.
Using PowerPivot to analyze MS Dynamics NAV: Project shows how to prepare MS Dynamics NAV data for analysis in PowerPivot for Excel. Project includes a Data Warehouse demo database, sql procedure...
vmware virtual mashines management system for education: System for managing many classes with PCs and VMware virtual machines (bad english, sorry)
WebSite: Mr. John's projects
Webtrends DX and DC Services Libraries: This project is set up to provide assemblies to simplify access to the REST based DX and DC Web Services that Webtrends provides. For access you ne...
YetAnotherFileRenamer: Searches a directory and/or sub-directories for files matching a certain extension and copies and renames them to the same folder. The intent is a...

New Releases
3D Chart Reports, using ASP.NET 2.0: Beta iscm1.0.0.1: Its first release; might be tolerant to bugs.
A Guide to Parallel Programming: Drop 2 - Guide Chapters 2 and 5, accompanying code: This is Drop 2 with just the Guide Chapters 2 and 5 and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 or later in ord...
AnimeJ: AnimeJ 1.1: This new release features reversible animations. It is fully backward compatible, and by simply specifying an additional parameter to Run() you can...
AutoPoco: AutoPoco 0.4: A load of additions to the convention system. Inheritance precedence added. And a pile of extra data sources
CSharp Intellisense: V1.5: CSharpIntellisensePresenter (Free) Created by: Bnaya Eshet (credit to Karl Shifflett) Last Updated: Tuesday, April 13, 2010 Version...
DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 4: Alpha 4 upgrades all story-telling with one very basic, simple 'story' data pattern, schema, and stylesheet. Basic, simple stories and eBook pa...
ESPEHA: Espeha 4.3: Deletion of categories and tasks via F8. Saving on CtrlS, CtrlShiftS. Navigating to search on CtrlF. Hiding on CtrlQ, CtrlX, CtrlH, Q, X, H + Drag ...
Folder Bookmarks: Folder Bookmarks 1.4.4: This is the latest version of Folder Bookmarks (1.4.4), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
HouseFly controls: HouseFly controls alpha 0.9.1: Version 0.9.1 alpha release of HouseFly controls
Industrial Dashboard: 2.0 Beta: 2.0 Beta includes: IndustrialDashboard Framework; sample widgets: TabularReport, DropDownPicker, DatePicker, ScopePicker; sample html pages
kdar: KDAR 0.0.20: KDAR - Kernel Debugger Anti Rootkit - signature bases updated - many bugs fixed
LogikBug.Injection: Initial Release: This project is dependent upon Microsoft.Practices.ServiceLocation.IServiceLocator and must be referenced when referencing LogikBug.Injection.
Mesopotamia Experiment: Mesopotamia 1.2.72: New Features - Select organisms to load in sim or in hw mode. Fixes - fixed robotics engine not shutting down properly - selects new robotics port o...
MobySharp: MobySharp 1.1: Fixed GetComments. Added the new Likes feature
MongoMvc: NoSQL with MongoDB, NoRM and ASP.NET MVC: A demo on NoSQL with MongoDB, NoRM and ASP.NET MVC. To run the demo application do the following actions. 1. Create a directory called C:\data\...
NetMassDownloader: Release 1.6.0.0: Mass Downloader for .Net Framework which allows you to download .Net Framework source code all at once. Offline debugging for VS2005, VS2008, Code...
Nito.KitchenSink: Version 4: Features added in this release: PDBs are source-indexed to CodePlex, so it should be possible to step through the code (with on-demand downloads) w...
Nito.KitchenSink: Version 5: The only feature for this release is support of the .NET 4.0 platform. Dependencies: Nito.Linq 0.4 Beta (released 2010-04-15), Rx 1.0.2441.0 (rele...
Nito.LINQ: Beta (v0.4): Rx version. The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2441.0, released 2010-04-15. Breaking changes: IEnumerable<T>.Min and IEnum...
N-Twill Twitter Client for VB.NET: NTWILL PROJECT 15-abril-2010: This file is a Zip that contains the project as edited up to this point... it has a few functions here and there, but I'm posting it to have a refere...
PanBrowser: 1.2.0: added screensaver support, hit a key to exit, up/down keys cycle through images.
PersianDateTimePicker, PersianMonthCalendar: PersianDatetimePicker, PersianMonthCalendar: PersianDatetimePicker, PersianMonthCalendar 1.1 releases
PokeIn Comet Ajax Library: PokeIn Library v03: New Features! Client MessageLog Management added. CrossBrowser Script Injector added to work with browsers without ajax support and with ActiveX disabled...
PokeIn Comet Ajax Library: PokeIn Sample VS09: Sample project of PokeIn for VS09. Contains version 0.3 of the Library
PokeIn Comet Ajax Library: PokeIn Sample VS10: Sample Visual Studio 10 project for PokeIn. Contains the v03 Library of PokeIn
Silverlight Toolkit: April 2010: Suggestions? Features? Questions? Ask questions in our Silverlight Toolkit forum on Silverlight.net. The forum is the best resource for Silverlig...
Software Localization Tool: SharpSLT 0.9: This is the first release of SharpSLT. Features: Framework: .Net 3.5. UI language: English. Works as standalone and as an External Tool for Visual C# Ex...
Star Trooper for XNA 2D Tutorial: Lesson two content: Here is Lesson two original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkgen...
StyleCop for ReSharper: StyleCop for ReSharper 5.0.14714.1: StyleCop for ReSharper 5.0.14714.1: StyleCop 4.3.3.0, ReSharper 5.0.1659.36, Visual Studio 2008 / Visual Studio 2010. Installation instructi...
TaskUnZip for SSIS: TaskUnZip for SSIS 1.1.1.0 (beta 2): Ver. 1.1.1.0 (beta2). Bug: corrected installation bug. Add: support for SQL SERVER 2008 / 2005. Minor corrections. Ver. 1.1.0.0 (Tnx to: Kevin Wendler)...
TaskUnZip for SSIS: TaskUnZip for SSIS 1.2.0.0: Ver. 1.2.0.0. Add: support for SQL SERVER 2008 / 2005. Add: recursive compress.* Add: filter option for extract and compress files.* Add: test archiv...
Test Project (ignore): abcde: aaaaada
TR9: TR9 Beta: this setup is the first release of the program.
TR9: TR9 Source Code: TR9 Source Code
Using PowerPivot to analyze MS Dynamics NAV: GL sql procedures and demo DB: For using the sql procedures and creating the DW database, please see the Documentation.
VivoSocial: VivoSocial 7.1.1: Version 7.1.1 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.1.1 rel...
WPF Inspirational Quote Management System: Release 1.1.1: - Changed to only allow one running instance at a time. - Custom icon in system tray popup removed for now as this breaks when text size is set to gr...
WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6.1: Just a minor release to fix the problem with resource Uri generation in the C# ShaderEffect files. Changes: The Uri should be correctly generated no...

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
ASP.NET
Microsoft SQL Server Community & Samples
PHPExcel
patterns & practices – Enterprise Library

Most Active Projects
Rawr
patterns & practices – Enterprise Library
GMap.NET - Great Maps for Windows Forms & Presentation
Industrial Dashboard
Farseer Physics Engine
Ionics Isapi Rewrite Filter
NB_Store - Free DotNetNuke Ecommerce Catalog Module
BlogEngine.NET
jQuery Library for SharePoint Web Services
DotRas

    Read the article

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line”, I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools: using them to generate up-to-date documentation with every database change.

    There are many reasons why up-to-date documentation is valuable, for example when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risk of making incorrect decisions when making changes. Documentation is also very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples on this site discussing why up-to-date documentation is valuable: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need it most!

    SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date.

    The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a CI server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always-up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may also be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control.

    The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
    $serverName = "server\instance"
    $databaseName = "databaseName"   # If you want to document multiple databases, use a comma-separated list
    $userName = "username"
    $password = "password"

    # Path to SQLDoc.exe
    $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

    $arguments = @(
        "/server:$($serverName)",
        "/database:$($databaseName)",
        "/username:$($userName)",
        "/password:$($password)",
        "/filetype:html",
        "/outputfolder:.",
        # "/project:$args[0]",   # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden by any options set on the command line.
        "/name:$databaseName Report",
        "/copyrightauthor:$([Environment]::UserName)"
    )

    write-host $arguments
    & $SQLDocPath $arguments

    There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line, please see the SQL Doc command line documentation.

    If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project, please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.

    Conclusion
    Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be regenerated on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.
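    As a rough sketch of the scheduled-task option mentioned above (the task name, script path and start time below are illustrative assumptions, not from the original article), the script could be saved as GenerateDocs.ps1 and registered to run daily with the built-in schtasks utility:

        rem Register a daily 2 AM run of the documentation script (illustrative paths)
        schtasks /Create /TN "Generate DB Documentation" /TR "powershell.exe -NoProfile -File C:\Scripts\GenerateDocs.ps1" /SC DAILY /ST 02:00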

    Read the article

  • Adding additional users to Ubuntu server and configuring Samba

    - by Ben
    I have installed Ubuntu Server 12.10 and during the install created a user for myself, ben. I now wish to add a second user, bill. I have an external drive that I have mounted to /media/storage and created a shared folder called Share; the owner of the folder is ben:ben. How do I grant bill access to the folder? I don't want to put him in the group ben. Once set up, I need to configure Samba & NFS. Here is my Samba configuration:

        [Share]
        path = /media/storage/Share
        read only = no
        public = yes
        writeable = yes
        force user = ben

    How do I give both bill and ben access to the share via Samba? Thank you.
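    One common way to do this (a sketch, not part of the original question; the group name sharegroup is an assumed example) is to create a dedicated group for the share, add both users to it, and make the folder group-writable with the setgid bit so new files inherit the group:

        # create the second user and an example group for the share
        sudo adduser bill
        sudo groupadd sharegroup
        sudo usermod -aG sharegroup ben
        sudo usermod -aG sharegroup bill

        # give the group ownership and group-write access;
        # the leading 2 (setgid) makes new files inherit the group
        sudo chgrp -R sharegroup /media/storage/Share
        sudo chmod -R 2775 /media/storage/Share

    On the Samba side, replacing force user = ben with force group = sharegroup (plus create mask = 0664 and directory mask = 2775) would then make files created over the share belong to the shared group, giving both users access.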

    Read the article

  • VBoxManage modifyhd --resize doesn't exist?

    - by George Korac
    I'm trying to increase the size of a VirtualBox Win7 .vdi disk on Ubuntu 10.04, but when I try executing VBoxManage modifyhd /path/disk.vdi --resize 15360 it returns Syntax error: unknown option: --resize. I'm unsure as to why this is happening, because I've used it before and it's still listed under valid options for VBoxManage modifyhd in the VirtualBox User Manual. Cheers, George

    @maniat1k:

        george@george-laptop:~$ VBoxManage modifyhd '/home/george/.VirtualBox/HardDisks/Windows 7 64bit.vdi' --resize 15360
        Sun VirtualBox Command Line Management Interface Version 3.1.6_OSE
        (C) 2005-2010 Sun Microsystems, Inc.
        All rights reserved.

        Usage:
        VBoxManage modifyhd <uuid>|<filename>
                            [--type normal|writethrough|immutable]
                            [--autoreset on|off]
                            [--compact]

        Syntax error: unknown option: --resize
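    A likely explanation, offered here as a hedged note: the output shows VirtualBox 3.1.6_OSE, and the --resize option was only added to VBoxManage modifyhd in VirtualBox 4.0, so the older OSE build shipped in the Ubuntu 10.04 repositories does not recognise it. After upgrading to VirtualBox 4.0 or later, the original command should be accepted:

        VBoxManage modifyhd '/home/george/.VirtualBox/HardDisks/Windows 7 64bit.vdi' --resize 15360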

    Read the article
