Search Results

Search found 2619 results on 105 pages for 'marcus 33'.


  • Protecting Apache with Fail2Ban

    - by NetStudent
    Having checked my Apache logs for the last two days, I have noticed several attempts to access URLs such as /phpmyadmin and /phpldapadmin:

      121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 415 "-" "ZmEu"
      121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "ZmEu"
      121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "ZmEu"
      121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "ZmEu"
      121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu"
      121.14.241.135 - - [09/Jun/2012:04:37:37 +0100] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu"
      66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      188.132.178.34 - - [09/Jun/2012:08:39:05 +0100] "HEAD /manager/html HTTP/1.0" 404 166 "-" "-"
      95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
      194.128.132.2 - - [09/Jun/2012:16:04:41 +0100] "HEAD / HTTP/1.0" 200 260 "-" "-"
      66.249.68.176 - - [09/Jun/2012:18:08:12 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.68.176 - - [09/Jun/2012:18:08:13 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      212.3.106.249 - - [09/Jun/2012:18:12:33 +0100] "GET / HTTP/1.1" 200 388 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/ HTTP/1.1" 404 379 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/htdocs/ HTTP/1.1" 404 386 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:35 +0100] "GET /phpldap/ HTTP/1.1" 404 374 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /phpldap/htdocs/ HTTP/1.1" 404 381 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /admin/ HTTP/1.1" 404 372 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/ HTTP/1.1" 404 377 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/htdocs/ HTTP/1.1" 404 384 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/phpldap/ HTTP/1.1" 404 380 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldap/htdocs/ HTTP/1.1" 404 387 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldapadmin/htdocs/ HTTP/1.1" 404 392 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /admin/phpldapadmin/ HTTP/1.1" 404 385 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /openldap HTTP/1.1" 404 374 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:41 +0100] "GET /openldap/htdocs HTTP/1.1" 404 381 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:42 +0100] "GET /openldap/htdocs/ HTTP/1.1" 404 382 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/ HTTP/1.1" 404 371 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/htdocs/ HTTP/1.1" 404 378 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:45 +0100] "GET /ldap/phpldapadmin/ HTTP/1.1" 404 384 "-" "-"
      212.3.106.249 - - [09/Jun/2012:18:12:46 +0100] "GET /ldap/phpldapadmin/htdocs/ HTTP/1.1" 404 391 "-" "-"

    Is there any way I can use Fail2Ban or any other similar software to ban these IPs in situations when my server is being abused this way (by trying several "common" URLs)?
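    Fail2Ban can do exactly this with a custom filter plus a jail: the filter is a regular expression matched against the Apache access log, and the jail bans any host that matches it too often. The snippet below is only a sketch to adapt, not a drop-in answer: the filter name, URL list, and thresholds are illustrative, and the paths assume a Debian/Ubuntu layout.

      # /etc/fail2ban/filter.d/apache-probe.conf (illustrative filter name)
      [Definition]
      # Match requests for common admin URLs and known scanner signatures
      failregex = ^<HOST> - - \[.*\] "(GET|POST|HEAD) /+(phpmyadmin|phpMyAdmin|pma|myadmin|MyAdmin|phpldap\S*|openldap|ldap|admin|w00tw00t)
      ignoreregex =

      # appended to /etc/fail2ban/jail.local
      [apache-probe]
      enabled  = true
      port     = http,https
      filter   = apache-probe
      logpath  = /var/log/apache2/access.log
      maxretry = 2
      bantime  = 86400

    Before enabling the jail, the expression can be tested against the existing log with: fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/apache-probe.conf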

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts
    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine, the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. The Oracle Exadata X3-2 and X3-8 Database In-Memory Machines can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives; this makes Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. To realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission-critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements
    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 systems leverage next-generation technologies to deliver significant performance enhancements, including:

    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of terabytes of user data can now be managed entirely within Flash.
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous-generation Exadata systems, increasing their capacity for writes tenfold.
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors.
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data.
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today
    Oracle Exadata X3-2 Database In-Memory Machine systems are available in Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configurations to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes
    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources
    - Oracle Press Release
    - Oracle Exadata Database Machine
    - Oracle Exadata X3-2 Database In-Memory Machine
    - Oracle Exadata X3-8 Database In-Memory Machine
    - Oracle Database 11g
    - Follow Oracle Database via Blog, Facebook and Twitter
    - Oracle OpenWorld 2012
    - Oracle OpenWorld 2012 Keynotes
    - Like Oracle OpenWorld on Facebook
    - Follow Oracle OpenWorld on Twitter
    - Oracle OpenWorld Blog
    - Oracle OpenWorld on LinkedIn
    - Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza: watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • What makes them click?

    - by Piet
    The other day (well, actually some weeks ago, while relaxing at the beach in Kos) I read 'Neuro Web Design - What makes them click?' by Susan Weinschenk (http://neurowebbook.com). The book is a fast and easy read (no unnecessary filler) and a good introduction to how your site's visitors can be steered in the direction you want them to go.

    The Obvious
    The book handles some of the better-known, proven techniques, for example that ratings and testimonials from other people can help sell your product or service. Another well-known technique it talks about is inducing a sense of scarcity or urgency in the visitor: Only 2 seats left! Buy now and get 33% off! Just because these techniques are well known doesn't mean they stop working. Luckily, two-thirds of the book handles less obvious techniques, otherwise it wouldn't be worth buying.

    The Not So Obvious
    A less-known influencing technique is reciprocity. By that I don't mean swapping links with another website, but the fact that someone is more likely to do something for you after you have done something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site's design, which is less obvious than you might think. Want to know more? Buy the book!

    Other interesting sources
    For a more general introduction to the same principles, I'd suggest 'Yes! 50 Secrets from the Science of Persuasion'. 'Yes!…' cites some of the same studies (it seems there's a rather limited pool of studies covering this subject), but of course doesn't show how to implement these techniques in your site's design. I read 'Yes!…' last year, which made 'Neuro Web Design' just a little bit less interesting. Always make sure you're able to measure your changes: if you haven't yet, check out the advanced segmentation in Google Analytics (don't be afraid because it says 'beta', it works just fine) and Google Website Optimizer.

    Worth Buying?
    Can I recommend it? Sure, why not. I think it can be useful for anyone who ever had to think about the design or content of a site. You don't have to be a marketing guy to want a site you're involved with to be successful. The content/filler ratio is excellent too: you don't need to wade through dozens of pages to filter out the interesting bits (unlike 'The Design of Sites', which contains too much useless info and, because it's in dead-tree format, can't be googled). If you like it, you might also check out 'Yes! 50 Secrets from the Science of Persuasion'. Tip for people living in Europe: check Amazon UK for your book-buying needs. Because of the low UK Pound exchange rate, it's usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK than to order it at the local book store or web-shop.

    Read the article

  • How to reproject a shapefile from WGS 84 to the Spherical/Web Mercator projection

    - by samkea
    Definitions: you will need to know the meaning of the terms below. I have given a small description of each acronym, but you can google and learn more about them.

    #1: WGS-84 - World Geodetic System (1984) - a standard reference coordinate system used for cartography, geodesy and navigation.

    #2: EPSG - European Petroleum Survey Group - was a scientific organization with ties to the European petroleum industry, consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used. Which degree representation is used must be declared for the user by the supplier of data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which was renamed to EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection that is used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth, et cetera, to show interactive maps over the web with their nearly precise coordinates.

    Reprojection: after reading a lot about reprojecting my coordinates from the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got the ball rolling. This is how I did it:

    #1 Download and open your shapefile using QGIS; it's the one with the biggest number of coordinate reference systems/projections.

    #2 Use the Plugins menu, and enable fTools and the WFS plugin.

    #3 Use the Vector menu --> Data Management Tools and choose "Define current projection". Enable "use predefined reference system" and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84 UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK.

    #4 Now use the Vector menu --> Data Management Tools and choose "Export to new projection". The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this:

      +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs

    Your output user-defined spatial reference system should look like this:

      +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs

    Browse for the place where the shapefile is going to be stored and give it a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your re-projected map with a latitude/longitude pair of coordinates.

    #5 Now, this is not the actual Spherical/Web Mercator projection, but don't worry, this is where you have to stop. All the other custom web-mapping portals will pick this projection up and transform it into EPSG::3785 or EPSG:900913, but the coordinates will still remain as the LatLon pair of the projected shapefile. If you want to test a particular known point, QGIS has a lot of room for that. Go ahead and test it.
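    If you'd rather script this step than click through the QGIS dialogs, GDAL's ogr2ogr tool can do the same reprojection in one command. A minimal sketch using the proj4 strings from step #4 (the file names are placeholders, and you should swap in your own source SRS if you are not in zone 36):

      ogr2ogr -s_srs "+proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs" -t_srs "EPSG:4326" original_reprojected.shp original.shp

    The -s_srs flag declares the projection the data is currently in and -t_srs the one you want out; note that ogr2ogr takes the destination file before the source file.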

    Read the article

  • Silverlight Firestarter thoughts, and thanks to one and all!

    - by Dave Campbell
    A few metrics that of course got out of hand, but some may find interesting:

    1/2 - My share of the MVP of the Year award in February of 2009 with Laurent Bugnion
    2 - Number of degrees I hold: B.S., M.S. Electrical Engineering
    3 - Number of years in the U.S. Army
    3.5 - Number of years SilverlightCream has been posted
    4 - Number of times awarded MVP
    6 - Number of professional positions I've worked: Antenna Rigger, Boilermaker, Musician, Electronic Technician, Hardware Engineer, Software Engineer
    16 - Number of companies I've worked for during my career as an Engineer
    19 - Age at which I wrote my first line of code
    28 - Age at which I hit the workforce as an Engineer
    33 - Number of years working as an Engineer
    43 - Number of years writing code
    62 - Number of years since instantiation
    116 - Number of tags to search SilverlightCream with
    645 - Number of blogs I view to find articles (at this moment)
    664 - Number of articles tagged wp7dev at SilverlightCream right now
    700 - Number of Twitter followers for WynApse
    981 - Number of individual bloggers in the SilverlightCream database
    1002 - Number of SilverlightCream blogposts
    1100 - Number of people live in Redmond for the Firestarter (I think)
    1428 - Number of total blogposts at GeeksWithBlogs (not counting this one)
    4200 - Number of Feedburner subscribers (approximately)
    6500 - Number of Twitter followers for SilverlightNews (approximately)
    7087 - Number of posts tagged and aggregated at SilverlightCream right now
    13000 - Number of people registered to watch the Firestarter online (I think)

    The overwhelming feeling I have returning from the Silverlight Firestarter: Priceless.

    There is absolutely no way that I could personally thank everyone who, over the last few years, has held their hand out and offered me a step up to get to the point that Scott Guthrie called me out in his keynote. So I'm just going to hit the highlights here...

    Scott Guthrie
    Thanks for not only being at the level you are at Microsoft, but for being so approachable, easy to talk to, willing to help everyone, and above all knowledgeable. My first-level manager at my last position asked if Visual Studio was a graphics program... and you step up to a laptop at a conference and type "File->New Program"... 'nuff said... oh yeah, thanks for the shoutout!

    John Papa
    Thanks for being a good friend, ramrodding the Firestarter, being a great guy to be around, and for the poster... holy crap is that cool.

    Tim Heuer
    Thanks for all you did as a great DE in Phoenix, and for helping out so many of us, of course being a great guy, and for the poster as well... I think you and John shared that task.

    In no order at all: my buddy Michael Washington, Laurent Bugnion (the other half of the first Silverlight MVP of the Year), Tim Sneath, Mike Harsh, Chad Campbell and Bryant Likes (from back in the day), Adam Kinney, Jesse Liberty, Jeff Paries, Pete Brown, András Velvárt, David Kelly, Michael Palermo, Scott Cate, Erik Mork, and on and on... don't feel bad if your name didn't appear, I simply have too many supporters to name.

    Silverlight Firestarter
    Indeed. All the people mentioned here, and all the MVPs, knew Silverlight was NOT dead, but because of a very unfortunate circumstance, that became the popular media opinion. Consequently the Firestarter exploded from a laid-back event into a global conference. People worked their ass off getting bits ready and presentations using those bits. All to stem the flow of misinformation. All involved, please accept my personal thanks for an absolutely awesome job.
    I had the privilege of watching the 'prep' on Wednesday afternoon, and was blown away the first time I saw the 3D demo... and have been blown away every time I've seen it since. Not to mention all the other goodness in Silverlight 5. Yes, I hit 1000 on my blog, but more importantly, all of you are blogging and using Silverlight, and Microsoft hit one completely out of the park... no... they knocked it out of the neighborhood with the Firestarter. It was amazing to be there for it, and it will be awesome to use the new bits as we get them. Keep reading, there's tons more to come with Silverlight, and SilverlightCream following along behind. As usual, this old hacker is humbled to be allowed to play with all the cool kids... Thanks one and all for everything, and Stay in the 'Light

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we start getting into a more code-heavy portion of this series. Thankfully, this means the groundwork is for the most part all set, and after adding the modules we will have a complete application that can be provided with full source. The Jobs module has two concerns: adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this:

    And now... on to the code.

    XAML for the Job Posting View

    All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both display records and perform operations on the records without much hassle. That XAML is going to look something like this:

    01.<Grid x:Name="LayoutRoot"
    02.Background="White">
    03.<Grid.RowDefinitions>
    04.<RowDefinition Height="30" />
    05.<RowDefinition />
    06.</Grid.RowDefinitions>
    07.<StackPanel Orientation="Horizontal">
    08.<Button x:Name="xAddRecordButton"
    09.Content="Add Job"
    10.Width="120"
    11.cal:Click.Command="{Binding AddRecord}"
    12.telerik:StyleManager.Theme="Windows7" />
    13.<Button x:Name="xEditRecordButton"
    14.Content="Edit Job"
    15.Width="120"
    16.cal:Click.Command="{Binding EditRecord}"
    17.telerik:StyleManager.Theme="Windows7" />
    18.</StackPanel>
    19.<telerikGrid:RadGridView x:Name="xJobsGrid"
    20.Grid.Row="1"
    21.IsReadOnly="True"
    22.AutoGenerateColumns="False"
    23.ColumnWidth="*"
    24.RowDetailsVisibilityMode="VisibleWhenSelected"
    25.ItemsSource="{Binding MyJobs}"
    26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}"
    27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}">
    28.<telerikGrid:RadGridView.Columns>
    29.<telerikGrid:GridViewDataColumn Header="Job Title"
    30.DataMemberBinding="{Binding JobTitle}"
    31.UniqueName="JobTitle" />
    32.<telerikGrid:GridViewDataColumn Header="Location"
    33.DataMemberBinding="{Binding Location}"
    34.UniqueName="Location" />
    35.<telerikGrid:GridViewDataColumn Header="Resume Required"
    36.DataMemberBinding="{Binding NeedsResume}"
    37.UniqueName="NeedsResume" />
    38.<telerikGrid:GridViewDataColumn Header="CV Required"
    39.DataMemberBinding="{Binding NeedsCV}"
    40.UniqueName="NeedsCV" />
    41.<telerikGrid:GridViewDataColumn Header="Overview Required"
    42.DataMemberBinding="{Binding NeedsOverview}"
    43.UniqueName="NeedsOverview" />
    44.<telerikGrid:GridViewDataColumn Header="Active"
    45.DataMemberBinding="{Binding IsActive}"
    46.UniqueName="IsActive" />
    47.</telerikGrid:RadGridView.Columns>
    48.</telerikGrid:RadGridView>
    49.</Grid>

    I'll explain what's happening here by line numbers:

    Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel.
    Line 25: The source for the jobs will be a collection in the viewmodel.
    Line 26: We also bind the selected item to a public property from the viewmodel for use in code.
    Line 27: We've turned the event into a command so we can handle it via code in the viewmodel.
    So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but what about line 27? This actually comes from something I read on Damien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this:

    SelectedItemChangedEventBehavior.cs:

    01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl>
    02.{
    03.public SelectedItemChangedEventBehavior(DataControl element)
    04.: base(element)
    05.{
    06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged);
    07.}
    08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e)
    09.{
    10.// We'll only ever allow single selection, so will only need item index 0
    11.base.CommandParameter = e.AddedItems[0];
    12.base.ExecuteCommand();
    13.}
    14.}

    SelectedItemChangedEventClass.cs:

    01.public class SelectedItemChangedEventClass
    02.{
    03.#region The Command Stuff
    04.public static ICommand GetCommand(DependencyObject obj)
    05.{
    06.return (ICommand)obj.GetValue(CommandProperty);
    07.}
    08.public static void SetCommand(DependencyObject obj, ICommand value)
    09.{
    10.obj.SetValue(CommandProperty, value);
    11.}
    12.public static readonly DependencyProperty CommandProperty =
    13.DependencyProperty.RegisterAttached("Command", typeof(ICommand),
    14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback));
    15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
    16.{
    17.DataControl element = dependencyObject as DataControl;
    18.if (element != null)
    19.{
    20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element);
    21.behavior.Command = e.NewValue as ICommand;
    22.}
    23.}
    24.#endregion
    25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element)
    26.{
    27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior;
    28.if (behavior == null)
    29.{
    30.behavior = new SelectedItemChangedEventBehavior(element);
    31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior);
    32.}
    33.return behavior;
    34.}
    35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj)
    36.{
    37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty);
    38.}
    39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value)
    40.{
    41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value);
    42.}
    43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty =
    44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior",
    45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null);
    46.}

    These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right?
    Viewmodel for the Job Posting View

    The viewmodel is going to need to handle all events going back and forth, maintain interactions with the data we are using, and both publish and subscribe to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line:

    001.public class JobPostingViewModel : ViewModelBase
    002.{
    003.private readonly IEventAggregator eventAggregator;
    004.private readonly IRegionManager regionManager;
    005.public DelegateCommand<object> AddRecord { get; set; }
    006.public DelegateCommand<object> EditRecord { get; set; }
    007.public DelegateCommand<object> SelectedItemChanged { get; set; }
    008.public RecruitingContext context;
    009.private QueryableCollectionView _myJobs;
    010.public QueryableCollectionView MyJobs
    011.{
    012.get { return _myJobs; }
    013.}
    014.private QueryableCollectionView _selectionJobActionHistory;
    015.public QueryableCollectionView SelectedJobActionHistory
    016.{
    017.get { return _selectionJobActionHistory; }
    018.}
    019.private JobPosting _selectedJob;
    020.public JobPosting SelectedJob
    021.{
    022.get { return _selectedJob; }
    023.set
    024.{
    025.if (value != _selectedJob)
    026.{
    027._selectedJob = value;
    028.NotifyChanged("SelectedJob");
    029.}
    030.}
    031.}
    032.public SubscriptionToken editToken = new SubscriptionToken();
    033.public SubscriptionToken addToken = new SubscriptionToken();
    034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
    035.{
    036.// set Unity items
    037.this.eventAggregator = eventAgg;
    038.this.regionManager = regionmanager;
    039.// load our context
    040.context = new RecruitingContext();
    041.this._myJobs = new QueryableCollectionView(context.JobPostings);
    042.context.Load(context.GetJobPostingsQuery());
    043.// set command events
    044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord);
    045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord);
    046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged);
    047.SetSubscriptions();
    048.}
    049.#region DelegateCommands from View
    050.public void AddNewRecord(object obj)
    051.{
    052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true);
    053.}
    054.public void EditExistingRecord(object obj)
    055.{
    056.if (_selectedJob == null)
    057.{
    058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected.");
    059.}
    060.else
    061.{
    062.this._myJobs.EditItem(this._selectedJob);
    063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob);
    064.}
    065.}
    066.public void SelectedRecordChanged(object obj)
    067.{
    068.if (obj.GetType() == typeof(ActionHistory))
    069.{
    070.// event bubbles up so we don't catch items from the ActionHistory grid
    071.}
    072.else
    073.{
    074.JobPosting job = obj as JobPosting;
    075.GrabHistory(job.PostingID);
    076.}
    077.}
    078.#endregion
    079.#region Subscription Declaration and Events
    080.public void SetSubscriptions()
    081.{
    082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>();
    083.if (editToken != null)
    084.editComplete.Unsubscribe(editToken);
    085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler);
    086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>();
    087.if (addToken != null)
    088.addComplete.Unsubscribe(addToken);
    089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler);
    090.}
    091.public void EditCompleteEventHandler(bool complete)
    092.{
    093.if (complete)
    094.{
    095.JobPosting thisJob = _myJobs.CurrentEditItem as JobPosting;
    096.this._myJobs.CommitEdit();
    097.this.context.SubmitChanges((s) =>
    098.{
    099.ActionHistory myAction = new ActionHistory();
    100.myAction.PostingID = thisJob.PostingID;
    101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user");
    102.myAction.TimeStamp = DateTime.Now;
    103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
    104.}
    105., null);
    106.}
    107.else
    108.{
    109.this._myJobs.CancelEdit();
    110.}
    111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
    112.}
    113.public void AddCompleteEventHandler(JobPosting job)
    114.{
    115.if (job == null)
    116.{
    117.// do nothing, new job add cancelled
    118.}
    119.else
    120.{
    121.this.context.JobPostings.Add(job);
    122.this.context.SubmitChanges((s) =>
    123.{
    124.ActionHistory myAction = new ActionHistory();
    125.myAction.PostingID = job.PostingID;
    126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user");
    127.myAction.TimeStamp = DateTime.Now;
    128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
    129.}
    130., null);
    131.}
    132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
    133.}
    134.#endregion
    135.public void GrabHistory(int postID)
    136.{
    137.context.ActionHistories.Clear();
    138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories);
    139.context.Load(context.GetHistoryForJobQuery(postID));
    140.}
    141.}

    Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road, and also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but which I will get into below.

    The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves as far as functionality goes; the one thing of note is that we're sending events off to the Event Aggregator, which some module, somewhere, will take care of. Since these aren't entirely relying on one another, the Jobs view doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63) regardless. Don't mind the GrabHistory() method so much; that is just grabbing history items (visibly being created in the SubmitChanges callbacks) and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;)

    So where are we at? Well, if we click to Add a job, we publish an event; if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the viewmodel, of course!
    XAML for the AddEditJobView

    This is pretty straightforward except for one thing, noted below:

    001.<Grid x:Name="LayoutRoot"
    002.Background="White">
    003.<Grid x:Name="xEditGrid"
    004.Margin="10"
    005.validationHelper:ValidationScope.Errors="{Binding Errors}">
    006.<Grid.Background>
    007.<LinearGradientBrush EndPoint="0.5,1"
    008.StartPoint="0.5,0">
    009.<GradientStop Color="#FFC7C7C7"
    010.Offset="0" />
    011.<GradientStop Color="#FFF6F3F3"
    012.Offset="1" />
    013.</LinearGradientBrush>
    014.</Grid.Background>
    015.<Grid.RowDefinitions>
    016.<RowDefinition Height="40" />
    017.<RowDefinition Height="40" />
    018.<RowDefinition Height="40" />
    019.<RowDefinition Height="100" />
    020.<RowDefinition Height="100" />
    021.<RowDefinition Height="100" />
    022.<RowDefinition Height="40" />
    023.<RowDefinition Height="40" />
    024.<RowDefinition Height="40" />
    025.</Grid.RowDefinitions>
    026.<Grid.ColumnDefinitions>
    027.<ColumnDefinition Width="150" />
    028.<ColumnDefinition Width="150" />
    029.<ColumnDefinition Width="300" />
    030.<ColumnDefinition Width="100" />
    031.</Grid.ColumnDefinitions>
    032.<!-- Title -->
    033.<TextBlock Margin="8"
    034.Text="{Binding AddEditString}"
    035.TextWrapping="Wrap"
    036.Grid.Column="1"
    037.Grid.ColumnSpan="2"
    038.FontSize="16" />
    039.<!-- Data entry area-->
    040.
    041.<TextBlock Margin="8,0,0,0"
    042.Style="{StaticResource LabelTxb}"
    043.Grid.Row="1"
    044.Text="Job Title"
    045.VerticalAlignment="Center" />
    046.<TextBox x:Name="xJobTitleTB"
    047.Margin="0,8"
    048.Grid.Column="1"
    049.Grid.Row="1"
    050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    051.Grid.ColumnSpan="2" />
    052.<TextBlock Margin="8,0,0,0"
    053.Grid.Row="2"
    054.Text="Location"
    055.d:LayoutOverrides="Height"
    056.VerticalAlignment="Center" />
    057.<TextBox x:Name="xLocationTB"
    058.Margin="0,8"
    059.Grid.Column="1"
    060.Grid.Row="2"
    061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    062.Grid.ColumnSpan="2" />
    063.
    064.<TextBlock Margin="8,11,8,0"
    065.Grid.Row="3"
    066.Text="Description"
    067.TextWrapping="Wrap"
    068.VerticalAlignment="Top" />
    069.
    070.<TextBox x:Name="xDescriptionTB"
    071.Height="84"
    072.TextWrapping="Wrap"
    073.ScrollViewer.VerticalScrollBarVisibility="Auto"
    074.Grid.Column="1"
    075.Grid.Row="3"
    076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    077.Grid.ColumnSpan="2" />
    078.<TextBlock Margin="8,11,8,0"
    079.Grid.Row="4"
    080.Text="Requirements"
    081.TextWrapping="Wrap"
    082.VerticalAlignment="Top" />
    083.
    084.<TextBox x:Name="xRequirementsTB"
    085.Height="84"
    086.TextWrapping="Wrap"
    087.ScrollViewer.VerticalScrollBarVisibility="Auto"
    088.Grid.Column="1"
    089.Grid.Row="4"
    090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    091.Grid.ColumnSpan="2" />
    092.<TextBlock Margin="8,11,8,0"
    093.Grid.Row="5"
    094.Text="Qualifications"
    095.TextWrapping="Wrap"
    096.VerticalAlignment="Top" />
    097.
    098.<TextBox x:Name="xQualificationsTB"
    099.Height="84"
    100.TextWrapping="Wrap"
    101.ScrollViewer.VerticalScrollBarVisibility="Auto"
    102.Grid.Column="1"
    103.Grid.Row="5"
    104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    105.Grid.ColumnSpan="2" />
    106.<!-- Requirements Checkboxes-->
    107.
    108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15"
    109.Content="Resume Required"
    110.Grid.Row="6"
    111.Grid.ColumnSpan="2"
    112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    113.
    114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15"
    115.Content="Cover Letter Required"
    116.Grid.Column="2"
    117.Grid.Row="6"
    118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    119.
    120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15"
    121.Content="Overview Required"
    122.Grid.Row="7"
    123.Grid.ColumnSpan="2"
    124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    125.
    126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15"
    127.Content="Job is Active"
    128.Grid.Column="2"
    129.Grid.Row="7"
    130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    131.
    132.<!-- Buttons -->
    133.
    134.<Button x:Name="xAddEditButton" Margin="8,8,0,10"
    135.Content="{Binding AddEditButtonString}"
    136.cal:Click.Command="{Binding AddEditCommand}"
    137.Grid.Column="2"
    138.Grid.Row="8"
    139.HorizontalAlignment="Left"
    140.Width="125"
    141.telerik:StyleManager.Theme="Windows7" />
    142.
    143.<Button x:Name="xCancelButton" HorizontalAlignment="Right"
    144.Content="Cancel"
    145.cal:Click.Command="{Binding CancelCommand}"
    146.Margin="0,8,8,10"
    147.Width="125"
    148.Grid.Column="2"
    149.Grid.Row="8"
    150.telerik:StyleManager.Theme="Windows7" />
    151.</Grid>
    152.</Grid>

    The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. It comes from an approach found on the Joy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood.

    Viewmodel for the AddEditJobView

    We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff:

    001.public class AddEditJobViewModel : ViewModelBase
    002.{
    003.private readonly IEventAggregator eventAggregator;
    004.private readonly IRegionManager regionManager;
    005.
    006.public RecruitingContext context;
    007.
    008.private JobPosting _activeJob;
    009.public JobPosting activeJob
    010.{
    011.get { return _activeJob; }
    012.set
    013.{
    014.if (_activeJob != value)
    015.{
    016._activeJob = value;
    017.NotifyChanged("activeJob");
    018.}
    019.}
    020.}
    021.
    022.public bool isNewJob;
    023.
    024.private string _addEditString;
    025.public string AddEditString
    026.{
    027.get { return _addEditString; }
    028.set
    029.{
    030.if (_addEditString != value)
    031.{
    032._addEditString = value;
    033.NotifyChanged("AddEditString");
    034.}
    035.}
    036.}
    037.
    038.private string _addEditButtonString;
    039.public string AddEditButtonString
    040.{
    041.get { return _addEditButtonString; }
    042.set
    043.{
    044.if (_addEditButtonString != value)
    045.{
    046._addEditButtonString = value;
    047.NotifyChanged("AddEditButtonString");
    048.}
    049.}
    050.}
    051.
    052.public SubscriptionToken addJobToken = new SubscriptionToken();
    053.public SubscriptionToken editJobToken = new SubscriptionToken();
    054.
    055.public DelegateCommand<object> AddEditCommand { get; set; }
    056.public DelegateCommand<object> CancelCommand { get; set; }
    057.
    058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>();
    059.public ObservableCollection<ValidationError> Errors
    060.{
    061.get { return _errors; }
    062.}
    063.
    064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
    065.public ObservableCollection<ValidationResult> ValResults
    066.{
    067.get { return this._valResults; }
    068.}
    069.
    070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
    071.{
    072.// set Unity items
    073.this.eventAggregator = eventAgg;
    074.this.regionManager = regionmanager;
    075.
    076.context = new RecruitingContext();
    077.
    078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand);
    079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand);
    080.
    081.SetSubscriptions();
    082.}
    083.
    084.#region Subscription Declaration and Events
    085.
    086.public void SetSubscriptions()
    087.{
    088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>();
    089.
    090.if (addJobToken != null)
    091.addJob.Unsubscribe(addJobToken);
    092.
    093.addJobToken = addJob.Subscribe(this.AddJobEventHandler);
    094.
    095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>();
    096.
    097.if (editJobToken != null)
    098.editJob.Unsubscribe(editJobToken);
    099.
    100.editJobToken = editJob.Subscribe(this.EditJobEventHandler);
    101.}
    102.
    103.public void AddJobEventHandler(bool isNew)
    104.{
    105.this.activeJob = null;
    106.this.activeJob = new JobPosting();
    107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately
    108.this.isNewJob = true;
    109.this.AddEditString = "Add New Job Posting";
    110.this.AddEditButtonString = "Add Job";
    111.
    112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
    113.}
    114.
    115.public void EditJobEventHandler(JobPosting editJob)
    116.{
    117.this.activeJob = null;
    118.this.activeJob = editJob;
    119.this.isNewJob = false;
    120.this.AddEditString = "Edit Job Posting";
    121.this.AddEditButtonString = "Edit Job";
    122.
    123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
    124.}
    125.
    126.#endregion
    127.
    128.#region DelegateCommands from View
    129.
    130.public void AddEditJobCommand(object obj)
    131.{
    132.if (this.Errors.Count > 0)
    133.{
    134.List<string> errorMessages = new List<string>();
    135.
    136.foreach (var valR in this.Errors)
    137.{
    138.errorMessages.Add(valR.Exception.Message);
    139.}
    140.
    141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
    142.
    143.}
    144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
    145.{
    146.List<string> errorMessages = new List<string>();
    147.
    148.foreach (var valR in this._valResults)
    149.{
    150.errorMessages.Add(valR.ErrorMessage);
    151.}
    152.
    153.this._valResults.Clear();
    154.
    155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
    156.}
    157.else
    158.{
    159.if (this.isNewJob)
    160.{
    161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob);
    162.}
    163.else
    164.{
    165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true);
    166.}
    167.}
    168.}
    169.
    170.public void CancelAddEditCommand(object obj)
    171.{
    172.if (this.isNewJob)
    173.{
    174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null);
    175.}
    176.else
    177.{
    178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false);
    179.}
    180.}
    181.
    182.#endregion
    183.}

    We start seeing something new on line 103: the AddJobEventHandler will create a new job and set that to the activeJob item on the viewmodel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason: as I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :)

    Everything else should again be pretty straightforward, although I do a bit of validation checking in AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation:

    private JobPostingMetadata() { }

    [StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)]
    [Required(ErrorMessage = "Description is required.")]
    public string Description;

    [Required(ErrorMessage = "Active Status is Required")]
    public bool IsActive;

    [StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)]
    [Required(ErrorMessage = "Job Title is required.")]
    public string JobTitle;

    [Required]
    public string Location;

    public bool NeedsCV;
    public bool NeedsOverview;
    public bool NeedsResume;

    public int PostingID;

    [Required(ErrorMessage = "Qualifications are required.")]
    [StringLength(2500, ErrorMessage = "Qualifications should be more than one and less than 2500 characters.", MinimumLength = 1)]
    public string Qualifications;

    [StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)]
    [Required(ErrorMessage = "Requirements are required.")]
    public string Requirements;

    The RecruitCB Alternative

    See all that XAML I pasted above? Those are now two pieces sitting in the JobsView.xaml file. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: command references, replaced Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I added a ValidationSummary (with Visibility=Collapsed) to catch any errors that are popping up.
    Simple as can be, and it leads to this being the single code-behind file:

    001.public partial class JobsView : UserControl
    002.{
    003.public RecruitingContext context;
    004.public JobPosting activeJob;
    005.public bool isNew;
    006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
    007.public ObservableCollection<ValidationResult> ValResults
    008.{
    009.get { return this._valResults; }
    010.}
    011.public JobsView()
    012.{
    013.InitializeComponent();
    014.this.Loaded += new RoutedEventHandler(JobsView_Loaded);
    015.}
    016.void JobsView_Loaded(object sender, RoutedEventArgs e)
    017.{
    018.context = new RecruitingContext();
    019.xJobsGrid.ItemsSource = context.JobPostings;
    020.context.Load(context.GetJobPostingsQuery());
    021.}
    022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e)
    023.{
    024.activeJob = new JobPosting();
    025.isNew = true;
    026.xAddEditTitle.Text = "Add a Job Posting";
    027.xAddEditButton.Content = "Add";
    028.xEditGrid.DataContext = activeJob;
    029.HideJobsGrid();
    030.}
    031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e)
    032.{
    033.activeJob = xJobsGrid.SelectedItem as JobPosting;
    034.isNew = false;
    035.xAddEditTitle.Text = "Edit a Job Posting";
    036.xAddEditButton.Content = "Edit";
    037.xEditGrid.DataContext = activeJob;
    038.HideJobsGrid();
    039.}
    040.private void xAddEditButton_Click(object sender, RoutedEventArgs e)
    041.{
    042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
    043.{
    044.List<string> errorMessages = new List<string>();
    045.foreach (var valR in this._valResults)
    046.{
    047.errorMessages.Add(valR.ErrorMessage);
    048.}
    049.this._valResults.Clear();
    050.ShowErrors(errorMessages);
    051.}
    052.else if (xSummary.Errors.Count > 0)
    053.{
    054.List<string> errorMessages = new List<string>();
    055.foreach (var err in xSummary.Errors)
    056.{
    057.errorMessages.Add(err.Message);
    058.}
    059.ShowErrors(errorMessages);
    060.}
    061.else
    062.{
    063.if (this.isNew)
    064.{
    065.context.JobPostings.Add(activeJob);
    066.context.SubmitChanges((s) =>
    067.{
    068.// record a history entry for the newly added posting
    069.ActionHistory thisAction = new ActionHistory();
    070.thisAction.PostingID = activeJob.PostingID;
    071.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user");
    072.thisAction.TimeStamp = DateTime.Now;
    073.context.ActionHistories.Add(thisAction);
    074.context.SubmitChanges();
    075.}, null);
    076.}
    077.else
    078.{
    079.context.SubmitChanges((s) =>
    080.{
    081.// record a history entry for the edited posting
    082.ActionHistory thisAction = new ActionHistory();
    083.thisAction.PostingID = activeJob.PostingID;
    084.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user");
    085.thisAction.TimeStamp = DateTime.Now;
    086.context.ActionHistories.Add(thisAction);
    087.context.SubmitChanges();
    088.}, null);
    089.}
    090.ShowJobsGrid();
    091.}
    092.}
    093.private void xCancelButton_Click(object sender, RoutedEventArgs e)
    094.{
    095.ShowJobsGrid();
    096.}
    097.private void ShowJobsGrid()
    098.{
    099.xAddEditRecordButtonPanel.Visibility = Visibility.Visible;
    100.xEditGrid.Visibility = Visibility.Collapsed;
    101.xJobsGrid.Visibility = Visibility.Visible;
    102.}
    103.private void HideJobsGrid()
    104.{
    105.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed;
    106.xJobsGrid.Visibility = Visibility.Collapsed;
    107.xEditGrid.Visibility = Visibility.Visible;
    108.}
    109.private void ShowErrors(List<string> errorList)
    110.{
    111.string nm = "Errors received: \n";
    112.foreach (string anerror in errorList)
    113.nm += anerror + "\n";
    114.RadWindow.Alert(nm);
    115.}
    116.}

    The first 39 lines should be pretty familiar, as we're not doing anything too unorthodox to get this up and running. Once we hit xAddEditButton_Click on line 40, we're still doing pretty much the same things, except instead of checking the ValidationHelper errors, we both run a check on the current activeJob object and check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 66 and 79) to create an ActionHistory, which we will use to track these items down the line.

    That's all?

    Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views, but since the functionality is the same in both apps you still cannot tell them apart (strike two):

    Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun: the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance. Royalty rates are very low. A 10% royalty on the publisher's price is considered normal, and the publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money; on a $40 book, that works out to about $1.80 per copy. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money.

    But Steven and I must have done something right: Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant; the point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back.

    To celebrate the occasion, O'Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game:

    - You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not.
    - Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam, it will not be published and will not be eligible for any prize.
    - If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era.
    - Each person commenting has only one chance to win, regardless of the number of comments posted.
    - O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery.
    - Each winner can choose to receive a printed copy or an ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping to anywhere in the world, but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning.
    - Contest ends 28 February 2010, GMT+7 (Thai time).

    Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are:

    48: Xiaozu
    45: David Chisholm
    19: Miquel Burns
    33: Aaron Rice
    17: David Laing

    Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.

    Read the article

  • Oracle Social Network Developer Challenge: TEAM Informatics

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog.

    Here comes another Oracle Social Network Developer Challenge entry, this one courtesy of TEAM Informatics (@teaminformatics). As their name suggests, their entry was a true team effort, featuring the work of Jon Chartrand, Deepthi Sanikommu, Dmitry Shtulman, Raghavendra Joshi, and Daniel Stitely, with Wayne Boerger doing the presentation honors.

    Speaking of the presentation, Wayne's laptop wouldn't project onto the plasma we had in the OTN Lounge, but luckily, Noel (@noelportugal) had his iPad and VGA dongle in his backpack of goodies, so they were able to improvise by using the iPad camera to capture Wayne's demo and project the video to the plasma. Code will find a way.

    Anyway, TEAM built Do Over, an integration with Atlassian's JIRA, coincidentally something I've chatted with Rich (@rmanalan) about in the past. The basic idea is simple: integrate JIRA issues with Oracle Social Network to expand and centralize the conversation around issue resolution. In Dmitry's words:

    We were able to put together a team on fairly short notice and, after batting a few ideas around, decided to pursue an integration with JIRA, an issue and project tracking tool used in-house at TEAM. After getting to know WebCenter Social, we saw immediate benefits that a JIRA integration could bring, primarily due to the fact that JIRA only allows assignment of an issue to one person at a time. Integrating Social would allow collaboration and issue resolution to happen right from the JIRA Issue interface.

    TEAM tackled a very common pain point among developers, i.e. including everyone who needs to be involved in issue resolution in a single thread. If you've ever fixed bugs or participated in that process, you'll know that not everyone has access to the issue resolution system, which makes consolidating discussion time-consuming and fragmented. Why? Because we typically use email as the tool for collaboration.

    Oracle Social Network allows all parties involved to work in a single, private and secure conversation, and through its RESTful Public API, information from external systems like JIRA can be brought in for context. TEAM only had time to address half the solution, but given more time, I'm sure they would have made the integration bidirectional, allowing relevant commentary to be pushed back to JIRA, closing the loop.

    Here are some screenshots of their integration.

    When Oracle Social Network is released, TEAM will have something they use internally to work on issues, and maybe they'll even productize their work and add it to the Atlassian Marketplace so that other JIRA users can benefit from the combination of Oracle Social Network and JIRA.

    Thanks to everyone at TEAM for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week. Be sure to check out a full recap from Dmitry over on the TEAM blog.
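    As an aside, the shape of such a one-way bridge is easy to picture. The C# sketch below relays a JIRA webhook notification into a conversation. The OSN host, resource path, and JSON message format are hypothetical placeholders (the real resource names live in the Oracle Social Network REST API documentation), while the issue key and summary are values a standard JIRA issue webhook does carry.

      using System.Net.Http;
      using System.Text;
      using System.Threading.Tasks;

      class JiraToOsnRelay
      {
          static readonly HttpClient Client = new HttpClient();

          // Called with values parsed from the JSON body JIRA posts to a registered webhook.
          public static async Task RelayAsync(string conversationId, string issueKey, string summary)
          {
              // Hypothetical OSN endpoint; the real path, payload shape, and auth scheme will differ.
              string url = "https://osn.example.com/api/v1/conversations/" + conversationId + "/messages";
              string body = "{\"text\":\"JIRA " + issueKey + " updated: " + summary + "\"}";
              var response = await Client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
              response.EnsureSuccessStatusCode(); // surface relay failures to the caller
          }
      }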

    Read the article

  • Another sound not working post

    - by Thomas Smart
    I think I have tried all the other "sound not working" posts; I've lost count. I purged and reinstalled alsa and pulse, rebooted, added my user to the audio group, tried various lines in the alsa config file such as "options snd-hda-intel model=" with different options like generic, auto, basic, default, etc., and ran pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting.

Hardware: 16 GB RAM, Core i7-4790, Intel Haswell motherboard with onboard sound and graphics. Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI. OS: Ubuntu Server 14.04 with ubuntu-desktop installed. The GUI sound settings list only the dummy sound card.

alsamixer -c 0
Card: HDA Intel HDMI
Chip: Intel Haswell HDMI
View: Playback
Item: S/PDIF  [OO]

aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0

aplay -L
default
    Playback/recording through the PulseAudio sound server
null
    Discard all samples (playback) or generate zero samples (capture)
pulse
    PulseAudio Sound Server
hdmi:CARD=HDMI,DEV=0
    HDA Intel HDMI, HDMI 0
    HDMI Audio Output
dmix:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct sample mixing device
dsnoop:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct sample snooping device
hw:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct hardware device without any conversions
plughw:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Hardware device with all software conversions

cat /proc/asound/cards
0 [HDMI ]: HDA-Intel - HDA Intel HDMI
    HDA Intel HDMI at 0xf7d14000 irq 46

cat /proc/asound/devices
1: : sequencer
2: [ 0- 3]: digital audio playback
3: [ 0- 0]: hardware dependent
4: [ 0] : control
33: : timer

mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg
MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
mplayer: could not connect to socket
mplayer: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
libavformat version 54.20.4 (external)
Mismatching header version 54.20.3
libavformat file format detected.
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)
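    Since everything above shows the HDMI codec as the only card (card 0, device 3 in the aplay -l output), one thing worth trying is pointing ALSA's default device at that HDMI output explicitly. Below is a minimal sketch of an ~/.asoundrc (or /etc/asound.conf); this is an untested guess at a fix based on the reported card/device numbers, not a confirmed solution:

        # Route the ALSA "default" device to the HDMI output
        # (card 0, device 3, as listed by `aplay -l`).
        pcm.!default {
            type plug
            slave.pcm "hw:0,3"
        }
        ctl.!default {
            type hw
            card 0
        }

    After saving the file, PulseAudio would need a restart (pulseaudio -k, or a reboot) to pick the change up.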

    Read the article

  • Oracle Social Network and the Flying Monkey Smart Target

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I teased this before OpenWorld, and for those of you who didn’t make it to the show or didn’t come by the Office Hours to take the Oracle Social Network Technical Tour Noel (@noelportugal) ran, I give you the Flying Monkey Smart Target. In brief, Noel built a target, about two feet tall, which when struck, played monkey sounds and posted a comment to an Oracle Social Network Conversation, all controlled by a Raspberry Pi. He also connected a Dropcam to record the winner just prior to the strike. I’m not sure how it all works, but maybe Noel can post the technical specifics. Here’s Noel describing the Challenge, the Target and a few other tidbits in an interview with Friend of the ‘Lab, Bob Rhubart (@brhubart). The monkey target bits are 2:12-2:54 if you’re into brevity, but watch the whole thing. Here are some screen grabs from the Oracle Social Network Conversation, including the Conversation itself, where you can see all the strikes documented, the picture captured, and the annotation capabilities. That’s Diego in one shot, looking very focused, and Ernst in the other, who kindly annotated himself; they are two of the development team members. You might have seen them in the Oracle Social Network Hands-On Lab during the show. There’s a trend here. Not by accident, fun stuff like this has become our calling card, e.g. the Kscope 12 WebCenter Rock ‘em Sock ‘em Robots. Not only are these entertaining demonstrations, but they showcase what’s possible with RESTful APIs and get developers noodling on how easy it is to connect real objects to cloud services to fix pain points. I spoke to some great folks from the City of Atlanta about extending the concepts of the flying monkey target to physical asset monitoring. Just take an internet-connected camera with REST APIs like the Dropcam, wire it up to Oracle Social Network, and you can hack together a monitoring device for a datacenter or a warehouse. Sure, it’s easier said than done, but we’re a lot closer to that reality than we were even two years ago. Another noteworthy bit from Noel’s interview, beginning at 2:55, is the evolution of the social developer. Speaking of, make sure to check out the Oracle Social Developer Community. Look for more on the social developer in the coming months. Noel has become quite the Raspberry Pi evangelist, and why not; it’s a great tool, a low-power Linux machine, cheap ($35!) and highly extensible, perfect for makers and students alike. He attended a meetup on Saturday before OpenWorld, and during the show, I heard him evangelizing the Pi and its capabilities to many people. There is some fantastic innovation forming in that ecosystem, much of it with Java. The OTN gang raffled off five Pis, and I expect to see lots of great stuff in the very near future. Stay tuned this week for posts on all our Challenge entrants. There’s some great innovation you won’t want to miss. Find the comments. Update: I forgot to mention that Noel used Twilio, one of his favorite services, during the show to send out Challenge updates and information to all the contestants.

    Read the article

  • Acr.ExtDirect – Part 1 – Method Resolvers

    - by Allan Ritchie
    In my opinion, one of the most important qualities of any open source library is to be as open as possible while avoiding becoming invasive to your code/business model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn’t really a favourite with many people these days, since it comes with a startup performance hit and requires runtime compiling. I find that there is a whole ton of communication libraries out there currently requiring this (WCF, RIA, etc.). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back on the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are two other “out-of-the-box” implementations – convention based & XML configuration.

Convention – I don’t actually recommend using this one, since it opens up all of your public instance methods to remote execution calls.

XML Configuration – This isn’t so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration, and as I said earlier, XML isn’t the favourite these days.

So what are your options if you don’t like attributes, convention, or XML configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution.

    public interface IDirectMethodResolver {

        bool IsServiceType(ComponentModel model, Type type);
        string GetNamespace(ComponentModel model);
        string[] GetDirectMethodNames(ComponentModel model);
        DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
    }

Now to implement our own method resolver:

    public class TestResolver : IDirectMethodResolver {

        #region IDirectMethodResolver Members

        /// <summary>
        /// Determine if you are calling a service
        /// </summary>
        public bool IsServiceType(ComponentModel model, Type type) {
            return (type.Namespace == "MyBLL.Data");
        }

        /// <summary>
        /// Return the calling name for the client side
        /// </summary>
        public string GetNamespace(ComponentModel model) {
            return model.Name;
        }

        public string[] GetDirectMethodNames(ComponentModel model) {
            switch (model.Name) {
                case "Products" :
                    return new [] {
                        "GetProducts",
                        "LoadProduct",
                        "Save",
                        "Update"
                    };

                case "Categories" :
                    return new [] {
                        "GetProducts"
                    };

                default :
                    throw new ArgumentException("Invalid type");
            }
        }

        public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
            if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                return DirectMethodType.FormSubmit;

            else if (method.Name.StartsWith("Load"))
                return DirectMethodType.FormLoad;

            else
                return DirectMethodType.Direct;
        }

        #endregion
    }

And there you have it, your own custom method resolver. Pretty easy and pretty open ended!

    Read the article

  • Silverlight Cream for February 21, 2011 -- #1049

    - by Dave Campbell
    In this Issue: Rob Eisenberg(-2-), Gill Cleeren, Colin Eberhardt, Alex van Beek, Ishai Hachlili, Ollie Riches, Kevin Dockx, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), and John Papa. Above the Fold: Silverlight: "Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4" Alex van Beek WP7: "Google Sky on Windows Phone 7" Colin Eberhardt Shoutouts: My friends at SilverlightShow have their top 5 for last week posted: SilverlightShow for Feb 14 - 20, 2011 From SilverlightCream.com: Rob Eisenberg MVVMs Us with Caliburn.Micro! Rob Eisenberg chats with Carl and Richard on .NET Rocks episode 638 about Caliburn.Micro, which takes Convention-over-Configuration further, utilizing naming conventions to handle a large number of data binding, validation and other action-based characteristics in your app. Two Caliburn Releases in One Day! Rob Eisenberg also announced that release candidates for both Caliburn 2.0 and Caliburn.Micro 1.0 are now available. Check out the docs and get the bits. Getting ready for Microsoft Silverlight Exam 70-506 (Part 6) Gill Cleeren has Part 6 of his series on getting ready for the Silverlight exam up at SilverlightShow... this time out, Gill is discussing app startup, localization, and using resource dictionaries, just to name a few things. Google Sky on Windows Phone 7 Colin Eberhardt has a very cool WP7 app described where he's using Google Sky as the tile source for Bing Maps, and then has a list of 110 Messier Objects... interesting astronomical objects that you can look at... all with source! Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4 Alex van Beek has some Prism 4/Unity MVVM goodness up with this discussion of a login module using View and ViewModel base classes. Windows Phone 7 and WCF REST – Authentication Solutions Ishai Hachlili sent me this link to his post about WCF REST web services and authentication for WP7, and he offers up 2 solutions... from the looks of this, I'm also putting his blog on my watch list. WP7Contrib: Isolated Storage Cache Provider Ollie Riches has a complete explanation and code example of using the IsolatedStorageCacheProvider in their WP7Contrib library. Using a ChannelFactory in Silverlight, part two: binary cows & new-born calves Kevin Dockx follows up his post on Channel Factories with this part 2, expanding the knowledge base into using parameters and custom binding with binary encoding, both from reader suggestions. All about UriMapping in WP7 WindowsPhoneGeek has a post up about URI mappings in WP7... what it is, how to enable it in code-behind or XAML, then using it either with a hyperlink button or via the NavigationService class... all with code. Passing WP7 Memory Consumption requirements with the Coding4Fun MemoryCounter tool WindowsPhoneGeek's latest is a tutorial on the use of the Memory Counter control from the Coding4Fun toolkit and WP7 memory consumption. Getting Started With Linq Jesse Liberty gets into LINQ in Episode 33 of his WP7 'From Scratch' series... looks like a good LINQ starting point, and he's going to be doing a series on it. Linq with Objects In his second post on LINQ, Jesse Liberty is looking at creating a Linq query against a collection of objects... always good stuff, Jesse! Silverlight TV Silverlight TV 62: The Silverlight 5 Triad Unplugged John Papa is joined by Sam George, Larry Olson, and Vijay Devetha (the Silverlight Triad) on this Silverlight TV episode 62 to discuss how the team works together, and hey... they're hiring!
Stay in the 'Light!

    Read the article

  • SPARC T5-4 LDoms for RAC and WebLogic Clusters

    - by Jeff Taylor-Oracle
    I wanted to use two Oracle SPARC T5-4 servers to simultaneously host both Oracle RAC and a WebLogic Server Cluster. I chose to use Oracle VM Server for SPARC to create a cluster like this: There are plenty of trade-offs and decisions that need to be made, for example: Rather than configuring the system by hand, you might want to use an Oracle SuperCluster T5-8. My configuration is similar to jsavit's: Availability Best Practices - Example configuring a T5-8, but I chose to ignore some of the advice. Maybe I should have included an alternate service domain, but I decided that I already had enough redundancy. Both Oracle SPARC T5-4 servers were to be configured like this:

Domain     CPUs   Cores   Memory
Cntl       0.25       4    64 GB
App LDom   2.75      44   704 GB
DB LDom    1         16   256 GB

The systems started with everything in the primary domain:

# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-c--  UART    512   1023G    0.0%  0.0%  11m

# ldm list-spconfig
factory-default [current]
primary

# ldm list -o core,memory,physio
NAME
primary

CORE
    CID    CPUSET
    0      (0, 1, 2, 3, 4, 5, 6, 7)
    1      (8, 9, 10, 11, 12, 13, 14, 15)
    2      (16, 17, 18, 19, 20, 21, 22, 23)
-- SNIP
    62     (496, 497, 498, 499, 500, 501, 502, 503)
    63     (504, 505, 506, 507, 508, 509, 510, 511)

MEMORY
    RA               PA               SIZE
    0x30000000       0x30000000       255G
    0x80000000000    0x80000000000    256G
    0x100000000000   0x100000000000   256G
    0x180000000000   0x180000000000   256G   # Give this memory block to the DB LDom

IO
    DEVICE                           PSEUDONYM        OPTIONS
    pci@300                          pci_0
    pci@340                          pci_1
    pci@380                          pci_2
    pci@3c0                          pci_3
    pci@400                          pci_4
    pci@440                          pci_5
    pci@480                          pci_6
    pci@4c0                          pci_7
    pci@300/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE1
    pci@300/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE2
    pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0
    pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0
    pci@340/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE3
    pci@340/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE4
    pci@380/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE9
    pci@380/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE10
    pci@3c0/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE11
    pci@3c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE12
    pci@400/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE5
    pci@400/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE6
    pci@440/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE7
    pci@440/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE8
    pci@480/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE13
    pci@480/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE14
    pci@4c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE15
    pci@4c0/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE16
    pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1
    pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2

Added an additional service processor configuration:

# ldm add-spconfig split
# ldm list-spconfig
factory-default
primary
split [current]

And removed many of the
resources from the primary domain: # ldm start-reconf primary # ldm set-core 4 primary # ldm set-memory 32G primary # ldm rm-io pci@340 primary # ldm rm-io pci@380 primary # ldm rm-io pci@3c0 primary # ldm rm-io pci@400 primary # ldm rm-io pci@440 primary # ldm rm-io pci@480 primary # ldm rm-io pci@4c0 primary # init 6 Needed to add resources to the guest domains: # ldm add-domain db # ldm set-core cid=`seq -s"," 48 63` db # ldm add-memory mblock=0x180000000000:256G db # ldm add-io pci@480 db # ldm add-io pci@4c0 db # ldm add-domain app # ldm set-core 44 app # ldm set-memory 704G  app # ldm add-io pci@340 app # ldm add-io pci@380 app # ldm add-io pci@3c0 app # ldm add-io pci@400 app # ldm add-io pci@440 app Needed to set up services: # ldm add-vds primary-vds0 primary # ldm add-vcc port-range=5000-5100 primary-vcc0 primary Needed to add a virtual network port for the WebLogic application domain: # ipadm NAME              CLASS/TYPE STATE        UNDER      ADDR lo0               loopback   ok           --         --    lo0/v4         static     ok           --         ...    lo0/v6         static     ok           --         ... net0              ip         ok           --         ...    net0/v4        static     ok           --         xxx.xxx.xxx.xxx/24    net0/v6        addrconf   ok           --         ....    net0/v6        addrconf   ok           --         ... net8              ip         ok           --         --    net8/v4        static     ok           --         ... # dladm show-phys LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE net1              Ethernet             unknown    0      unknown   ixgbe1 net0              Ethernet             up         1000   full      ixgbe0 net8              Ethernet             up         10     full      usbecm2 # ldm add-vsw net-dev=net0 primary-vsw0 primary # ldm add-vnet vnet1 primary-vsw0 app Needed to add a virtual disk to the WebLogic application domain: # format Searching for disks...done AVAILABLE DISK SELECTIONS:        0. c0t5000CCA02505F874d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02505f874           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD0/disk        1. c0t5000CCA02506C468d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02506c468           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD1/disk        2. c0t5000CCA025067E5Cd0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca025067e5c           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD2/disk        3. 
c0t5000CCA02506C258d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02506c258           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD3/disk Specify disk (enter its number): ^C # ldm add-vdsdev /dev/dsk/c0t5000CCA02506C468d0s2 HDD1@primary-vds0 # ldm add-vdisk HDD1 HDD1@primary-vds0 app Add some additional spice to the pot: # ldm set-variable auto-boot\\?=false db # ldm set-variable auto-boot\\?=false app # ldm set-var boot-device=HDD1 app Bind the logical domains: # ldm bind db # ldm bind app At the end of the process, the system is set up like this: # ldm list -o core,memory,physio NAME             primary          CORE     CID    CPUSET     0      (0, 1, 2, 3, 4, 5, 6, 7)     1      (8, 9, 10, 11, 12, 13, 14, 15)     2      (16, 17, 18, 19, 20, 21, 22, 23)     3      (24, 25, 26, 27, 28, 29, 30, 31) MEMORY     RA               PA               SIZE                0x30000000       0x30000000       32G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@300                          pci_0               pci@300/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE1     pci@300/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE2     pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0     pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0   ------------------------------------------------------------------------------ NAME             app              CORE     CID    CPUSET     4      (32, 33, 34, 35, 36, 37, 38, 39)     5      (40, 41, 42, 43, 44, 45, 46, 47)     6      (48, 49, 50, 51, 52, 53, 54, 55)     7      (56, 57, 58, 59, 60, 61, 62, 63)     8      (64, 65, 66, 67, 68, 69, 70, 71)     9      (72, 73, 74, 75, 76, 77, 78, 79)     10     (80, 81, 82, 83, 84, 85, 86, 87)     11     (88, 89, 90, 91, 92, 93, 94, 95)     12     (96, 97, 98, 99, 100, 101, 102, 103)     13     (104, 105, 106, 107, 108, 109, 110, 111)     14     (112, 113, 114, 115, 116, 117, 118, 119)     15     (120, 121, 122, 123, 124, 125, 126, 127)     16     (128, 129, 130, 131, 132, 133, 134, 135)     17     (136, 137, 138, 139, 140, 141, 142, 143)     18     (144, 145, 146, 147, 148, 149, 150, 151)     19     (152, 153, 154, 155, 156, 157, 158, 159)     20     (160, 161, 162, 163, 164, 165, 166, 167)     21     (168, 169, 170, 171, 172, 173, 174, 175)     22     (176, 177, 178, 179, 180, 181, 182, 183)     23     (184, 185, 186, 187, 188, 189, 190, 191)     24     (192, 193, 194, 195, 196, 197, 198, 199)     25     (200, 201, 202, 203, 204, 205, 206, 207)     26     (208, 209, 210, 211, 212, 213, 214, 215)     27     (216, 217, 218, 219, 220, 221, 222, 223)     28     (224, 225, 226, 227, 228, 229, 230, 231)     29     (232, 233, 234, 235, 236, 237, 238, 239)     30     (240, 241, 242, 243, 244, 245, 246, 247)     31     (248, 249, 250, 251, 252, 253, 254, 255)     32     (256, 257, 258, 259, 260, 261, 262, 263)     33     (264, 265, 266, 267, 268, 269, 270, 271)     34     (272, 273, 274, 275, 276, 277, 278, 279)     35     (280, 281, 282, 283, 284, 285, 286, 287)     36     (288, 289, 290, 291, 292, 293, 294, 295)     37     (296, 297, 298, 299, 300, 301, 302, 303)     38     (304, 305, 306, 307, 308, 309, 310, 311)     39     (312, 313, 314, 315, 316, 317, 318, 319)     40     (320, 321, 322, 323, 324, 325, 326, 327)     41     (328, 329, 330, 331, 332, 333, 334, 335)     42     (336, 337, 338, 339, 340, 341, 342, 343)     43     (344, 345, 346, 347, 348, 349, 350, 351)     44     (352, 353, 354, 355, 356, 357, 358, 359)     45     (360, 361, 362, 363, 364, 365, 366, 367)     46     
(368, 369, 370, 371, 372, 373, 374, 375)     47     (376, 377, 378, 379, 380, 381, 382, 383) MEMORY     RA               PA               SIZE                0x30000000       0x830000000      192G     0x4000000000     0x80000000000    256G     0x8080000000     0x100000000000   256G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@340                          pci_1               pci@380                          pci_2               pci@3c0                          pci_3               pci@400                          pci_4               pci@440                          pci_5               pci@340/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE3     pci@340/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE4     pci@380/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE9     pci@380/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE10     pci@3c0/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE11     pci@3c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE12     pci@400/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE5     pci@400/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE6     pci@440/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE7     pci@440/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE8 ------------------------------------------------------------------------------ NAME             db               CORE     CID    CPUSET     48     (384, 385, 386, 387, 388, 389, 390, 391)     49     (392, 393, 394, 395, 396, 397, 398, 399)     50     (400, 401, 402, 403, 404, 405, 406, 407)     51     (408, 409, 410, 411, 412, 413, 414, 415)     52     (416, 417, 418, 419, 420, 421, 422, 423)     53     (424, 425, 426, 427, 428, 429, 430, 431)     54     (432, 433, 434, 435, 436, 437, 438, 439)     55     (440, 441, 442, 443, 444, 445, 446, 447)     56     (448, 449, 450, 451, 452, 453, 454, 455)     57     (456, 457, 458, 459, 460, 461, 462, 463)     58     (464, 465, 466, 467, 468, 469, 470, 471)     59     (472, 473, 474, 475, 476, 477, 478, 479)     60     (480, 481, 482, 483, 484, 485, 486, 487)     61     (488, 489, 490, 491, 492, 493, 494, 495)     62     (496, 497, 498, 499, 500, 501, 502, 503)     63     (504, 505, 506, 507, 508, 509, 510, 511) MEMORY     RA               PA               SIZE                0x80000000       0x180000000000   256G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@480                          pci_6               pci@4c0                          pci_7               pci@480/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE13     pci@480/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE14     pci@4c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE15     pci@4c0/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE16     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2   Start the domains: # ldm start app LDom app started # ldm start db LDom db started Make sure to start the vntsd service that was created, above. # svcs -a | grep ldo disabled        8:38:38 svc:/ldoms/vntsd:default online          8:38:58 svc:/ldoms/agents:default online          8:39:25 svc:/ldoms/ldmd:default # svcadm enable vntsd Now use the MAC address to configure the Solaris 11 Automated Installation. 
Database Logical Domain # telnet localhost 5000 {0} ok devalias screen                   /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@7/display@0 disk7                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p3 disk6                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p2 disk5                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p1 disk4                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p0 scsi1                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0 net3                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0,1 net2                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0 virtual-console          /virtual-devices/console@1 name                     aliases {0} ok boot net2 Boot device: /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0  File and args: 1000 Mbps full duplex Link up Requesting Internet Address for xx:xx:xx:xx:xx:xx Requesting Internet Address for xx:xx:xx:xx:xx:xx WLS Logical Domain # telnet localhost 5001 {0} ok devalias hdd1                     /virtual-devices@100/channel-devices@200/disk@0 vnet1                    /virtual-devices@100/channel-devices@200/network@0 net                      /virtual-devices@100/channel-devices@200/network@0 disk                     /virtual-devices@100/channel-devices@200/disk@0 virtual-console          /virtual-devices/console@1 name                     aliases {0} ok boot net Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args: Requesting Internet Address for xx:xx:xx:xx:xx:xx Requesting Internet Address for xx:xx:xx:xx:xx:xx Repeat the process for the second SPARC T5-4, install Solaris, RAC and WebLogic Cluster, and you are ready to go. Maybe buying a SuperCluster would have been easier.

    Read the article

  • Math with Timestamp

    - by Knut Vatsendvik
    Got this little SQL quiz from a colleague: how do you add or subtract exactly 1 second from a Timestamp? It sounded simple enough at first glance, but was a bit trickier than expected. If the data type had been a Date, we know that we could add or subtract days, minutes or seconds using + or –:

sysdate + 1 to add one day
sysdate - (1 / 24) to subtract one hour
sysdate + (1 / 86400) to add one second

Would the same arithmetic work with Timestamp as with Date? Let’s test it out with the following query:

SELECT systimestamp, systimestamp + (1 / 86400) FROM dual;

03.05.2010 22.11.50,240887 +02:00
03.05.2010

The first result line shows us the system time down to fractions of a second. The second result line shows the result as a Date (as used for date calculation), meaning that the granularity is reduced to a second. By using the PL/SQL dump() function, we can confirm this with the following query:

SELECT dump(systimestamp), dump(systimestamp + (1 / 86400)) FROM dual;

Typ=188 Len=20: 218,7,5,4,8,53,9,0,200,46,89,20,2,0,5,0,0,0,0,0
Typ=13 Len=8: 218,7,5,4,10,53,10,0

Typ=13 is the runtime representation of a Date. So how can we increase the precision to include fractions of a second? After investigating a bit, we found out that the interval data type INTERVAL DAY TO SECOND can be used, with the result of the addition or subtraction being a Timestamp. Let’s try our first query again, now using the interval data type:

SELECT systimestamp,
       systimestamp + INTERVAL '0 00:00:01.0' DAY TO SECOND(1)
FROM dual;

03.05.2010 22.58.32,723659000 +02:00
03.05.2010 22.58.33,723659000 +02:00

Yes, it worked! To finish the story, here is one example showing how to specify an interval of 2 days, 6 hours, 30 minutes, 4 seconds and 111 thousandths of a second:

INTERVAL '2 6:30:4.111' DAY TO SECOND(3)
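    For completeness, subtraction works the same way, and Oracle's NUMTODSINTERVAL function can build the same interval from a plain number if you would rather not write interval literals. A small sketch in the same Oracle syntax as above:

        -- Subtract exactly one second, keeping fractional-second precision
        SELECT systimestamp - INTERVAL '0 00:00:01.0' DAY TO SECOND(1) FROM dual;

        -- Equivalent, building the interval from a number instead of a literal
        SELECT systimestamp + NUMTODSINTERVAL(1, 'SECOND') FROM dual;

    Both return Timestamps, so the fractional seconds survive the arithmetic.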

    Read the article

  • Can't boot into Ubuntu after Windows upgrade

    - by VanceAnce
    After my latest updates for Ubuntu and Windows XP, I got a Grub error on booting the next day. ls lists the following (without the parentheses): sd0 sd1, msdos sd2 sd5 sd6. When I try to get into one with (sd0,xy)/ it doesn't detect a system, or gives an unknown file system error. I booted a live session with a Knoppix live CD and found out that all my data still exists. I also tried to recover with TestDisk, and it finds all systems. Here is the TestDisk result:

     Partition               Start          End          Size in sectors
  1  * HPFS - NTFS        0   1  1     7079 254 63       113740137
  2  E extended LBA    7080   0  1    12161 254 63        81642330
  5  L HPFS - NTFS     7080   1  1    10266 254 63        51199092  [Schule]
     X extended       12031  30  1    12161 254 63         2102625
  6  L Linux Swap     12031  31 33    12161 254 63         2102530

I have one WinXP Home, one Ubuntu (ext3 + swap) and one WinXP Professional install. I then wrote the MBR with TestDisk, but I always get the same errors from Grub. What should I do? I need both XP and Ubuntu. Help me please.

More info is in the answers below. Sorry for the confusing style, but I am working from different live systems and browsers and always have to reboot; maybe an advanced user can correct my messy posting, and after I solve my issue I will register here. As I still can't comment on my own answer or those on top, I have to put this here as a separate answer (or even edit it; maybe it is a browser failure when using the live CDs, because this post I can edit). Here is the boot info script output; the result is the same as with TestDisk, but it looks worse, because it also doesn't detect my old Ubuntu, although there was no visible erase or overwrite process ending the last working session:

Boot Info Script 0.61 [1 April 2012]

============================= Boot Info Summary: ===============================

=> Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sda.

sda1: __________________________________________
File system: ntfs
Boot sector type: Windows XP: NTFS
Boot sector info: No errors found in the Boot Parameter Block.
Operating System: Windows XP
Boot files: /boot.ini /ntldr /NTDETECT.COM

sda2: __________________________________________
File system: Extended Partition
Boot sector type: -
Boot sector info:

sda5: __________________________________________
File system: ntfs
Boot sector type: Windows XP: NTFS
Boot sector info: According to the info in the boot sector, sda5 starts at sector 63.
Operating System: Windows XP
Boot files:

sda6: __________________________________________
File system: swap
Boot sector type: -
Boot sector info:

============================ Drive/Partition Info: =============================

Drive: sda _______________________________________

Disk /dev/sda: 100.0 GB, 100030242816 bytes
255 heads, 63 sectors/track, 12161 cylinders, total 195371568 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Partition  Boot  Start Sector  End Sector   # of Sectors  Id  System
/dev/sda1  *     63            113,740,199  113,740,137   7   NTFS / exFAT / HPFS
/dev/sda2        113,740,200   195,382,529  81,642,330    f   W95 Extended (LBA)
/dev/sda5        113,740,263   164,939,354  51,199,092    7   NTFS / exFAT / HPFS
/dev/sda6        193,280,000   195,382,529  2,102,530     82  Linux swap / Solaris

"blkid" output: ____________________________________

Device      UUID                                  TYPE      LABEL
/dev/loop0                                        squashfs
/dev/sda1   6596D86768011128                      ntfs
/dev/sda5   1300D3B7744EC141                      ntfs      Schule
/dev/sda6   5b95f2a1-4145-43a5-ac51-41d7dd32b213  swap

================================ Mount points: =================================

Device      Mount_Point  Type      Options
/dev/loop0  /rofs        squashfs  (ro,noatime)
/dev/sr0    /cdrom       iso9660   (ro,noatime)

================================ sda1/boot.ini: ================================

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /fastdetect /NoExecute=OptOut
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
[spybotsd]
timeout.old=30

The last part shows that I now use the Windows boot loader, so that I can access at least one OS. But shouldn't I also be able to access my Ubuntu partitions with live Linux CDs? Or do I have to boot with Grub to get to those files only?

    Read the article

  • How to resize / enlarge / grow a non-LVM ext4 partition

    - by Mischa
    I have already searched the forums but couldn't find a suitable answer: I have an Ubuntu Server 10.04 as KVM host and a guest system that also runs 10.04. The host system uses LVM, and there are three logical volumes which are provided to the guest as virtual block devices: one for /, one for /home, and one for swap. The guest had been partitioned without LVM. I have already enlarged the logical volume in the host system, and the guest successfully sees the bigger virtual disk. However, this virtual disk contains one "good old" partition, which still has the old small size. The output of fdisk -l is:

me@produktion:/$ LC_ALL=en_US sudo fdisk -l

Disk /dev/vda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c8ce7

Device Boot Start End Blocks Id System
/dev/vda1 * 1 3917 31455232 83 Linux

Disk /dev/vdb: 2147 MB, 2147483648 bytes
244 heads, 47 sectors/track, 365 cylinders
Units = cylinders of 11468 * 512 = 5871616 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f2bf7

Device Boot Start End Blocks Id System
/dev/vdb1 1 366 2095104 82 Linux swap / Solaris
Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(0, 43, 28)
Partition 1 has different physical/logical endings: phys=(260, 243, 47) logical=(365, 136, 44)

Disk /dev/vdc: 225.5 GB, 225485783040 bytes
255 heads, 63 sectors/track, 27413 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00027f25

Device Boot Start End Blocks Id System
/dev/vdc1 1 9138 73398272 83 Linux

The output of parted print all is:

Model: Virtio Block Device (virtblk)
Disk /dev/vda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 32.2GB 32.2GB primary ext4 boot

Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 2146MB 2145MB primary linux-swap(v1)

Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 225GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 75.2GB 75.2GB primary ext4

What I want to achieve is to simply grow or resize the partition /dev/vdc1 so that it uses the whole space provided by the virtual block device /dev/vdc. The problem is that when I try to do that with parted, it complains:

(parted) select /dev/vdc
Using /dev/vdc
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 225GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 75.2GB 75.2GB primary ext4
(parted) resize 1
WARNING: you are attempting to use parted to operate on (resize) a file system.
parted's file system manipulation code is not as robust as what you'll find in
dedicated, file-system-specific packages like e2fsprogs. We recommend you use
parted only to manipulate partition tables, whenever possible. Support for
performing most operations on most types of file systems will be removed in an
upcoming release.
Start? [1049kB]?
End? [75.2GB]? 224GB
Error: File system has an incompatible feature enabled. Compatible features are
has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or
debugfs to remove features.

So what can I do? This is a headless production system. What is a safe way to grow this partition? I CAN unmount it, though, so that is not the problem.
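    For what it's worth, the usual route on a non-LVM ext4 guest is to grow the partition table entry first and only then grow the file system with resize2fs, keeping parted away from the file system itself. A hedged sketch of that approach; the device name comes from the question, everything else should be verified against your own setup (and backed up) first:

        # 1. Re-create /dev/vdc1 with the same start but a larger end
        #    (in fdisk: d = delete, n = new, w = write; the start sector
        #    MUST stay identical or the file system is destroyed).
        sudo fdisk /dev/vdc

        # 2. Make the kernel re-read the partition table (or reboot the guest).
        sudo partprobe /dev/vdc

        # 3. Check the file system while unmounted, then grow it to fill
        #    the enlarged partition (resize2fs defaults to the full size).
        sudo e2fsck -f /dev/vdc1
        sudo resize2fs /dev/vdc1

    Because resize2fs, not parted, touches the file system, this sidesteps the "incompatible feature" complaint entirely.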

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

Aspect ratio is very important to home video. What is aspect ratio? The ratio of the width to the height. 2.35:1 means the image is 2.35 times as wide as it is high; Pixar uses this for half of our movies. This is called a widescreen image. When modified to fit your television screen, they cut this to fit the box of your screen. When a comparison is made, huge chunks of the picture are missing, and it is harder to follow what is going on when these pieces are missing. The whole is greater than the pieces themselves; if you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie.

Why different aspect ratios? Film before the 1950s was 1.33:1, the Academy Standard, although there were images of all aspects; there was no standard. Thomas Edison developed projecting images onto a wall/screen; he didn't patent it, as he saw no value in it. Then 1.37:1 came about to add a strip of sound; this is the same size as 35mm film. Around 1952, TV comes along, and NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4x1... really wide. 1.85:1 is Academy Flat; 2.35:1 is Anamorphic Scope (aka Panavision/CinemaScope). Almost all movies are made in these two aspect ratios; Pixar has done half in one and half in the other. Why choose one over the other? Artist choice. It is part of the story the director wants to tell.

Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options: Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio. Pan and Scan is another option: cut, and then scan left or right depending on where the action is. Frame Height: for animated movies, Pixar can go back and animate more picture to account for the bottom/top bars; you end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended. Re-staging: for animated movies, you can move characters around and restage the scene; it is a completely new, different version of the film. This is the best possible option that Pixar came up with, but they have pretty much stopped doing this, as the demand has dropped off.

Why not 1.33 today? There has been an evolution of taste and demands. VHS is a linear medium; the focus is on portability and not on quality. Most of it was pan and scan, and the quality was so bad, but people didn't notice. DVD was introduced in 1996; you could have more content, two versions of the film: the widescreen version and the 1.33 version. People realized that they were seeing more of the movie with the widescreen. High-def televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc was introduced in 2006; this is all widescreen. You cannot find a square TV anymore; TVs are roughly 1.85:1 aspect ratio. There is a change in demand: users are used to black bars and to widescreen; users are educated now.

What's next for in-flight entertainment? High-def IFE, personal electronic devices, 3D in-flight.
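    To put numbers on how much picture a crop throws away (a worked illustration; the exact loss depends on the transfer): when the source is wider than the target, the fraction of the frame lost is 1 minus the ratio of the target and source aspect ratios.

        2.35:1 cropped to 4:3 (1.33:1):   1 - 1.33/2.35 ≈ 43% of the image lost
        2.35:1 cropped to 16:9 (1.78:1):  1 - 1.78/2.35 ≈ 24% lost
        1.85:1 cropped to 4:3 (1.33:1):   1 - 1.33/1.85 ≈ 28% lost

    This is roughly where the "losing 30% of the movie" figure above comes from.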

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you're being presented with platform after platform for hosting, doctoring and sharing photos and videos. Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms while not actually committing to them, or strategizing for them, or resourcing them. TrackMaven found that of the 123 F500 companies using Instagram, only 22% of them are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number of brands establishing a presence on Instagram up 55% over the past year. Users want them there... brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT... the top 10 brands are generating 33% of all posts, reaping 83% of all engagement. Things are also growing on Twitter's Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won't get robbed of too much of their valuable time. Vine is always upgrading, so you can make sure your videos are worth viewers' time. You can now edit videos, and save and work on several projects concurrently. What you can't do is upload a finely crafted video into Vine, but you can do that with Instagram. The key to success? Same as with all other content: make it of value. Deliver a laugh or a lesson or both. How-tos, behind-the-scenes peeks, contests and demos all make sense in the short video format. Or follow Nash Grier's example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by some other kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well. Then there's Snapchat, the "this photo will self-destruct" platform. How can that be of use to brands besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there are "Snapchat Stories" that can last 24 hours and be shared to the public at large. You might be able to capitalize on how much more focus gets put on content when there's a time limit on its availability. The underlying truth to all of this is, these are all tools. Very cool, feature-rich tools, but tools. You can give the exact same art kit to 5 different people and you'd get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still and moving image artists. That's what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly. @mikestiles Photo: stock.xchng

    Read the article

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    Working recently on a stored procedure which loads records into a data warehouse, I found that the existing record was being expired using an update statement, followed by an insert to add the new active record. Playing around with the merge statement, you can actually expire the current record and insert the new record within one clean statement. This is how the statement works: we do the normal merge statement to insert a record when there is no match; if we match the record, we update the existing record by expiring and deactivating it. At the end of the merge statement we use the output clause to return the staging values for the update, wrap the whole merge statement within an insert statement, and add new active rows for the records which we expired. I've added the full script at the bottom so you can paste it and play around.

INSERT INTO ExampleFactUpdate
    (PolicyID,
     Status)
SELECT  -- these columns are returned from the output statement
    PolicyID,
    Status
FROM
(
    -- merge statement on unique id, in this case PolicyID
    MERGE dbo.ExampleFactUpdate dp
    USING dbo.ExampleStag s
        ON dp.PolicyID = s.PolicyID
    WHEN NOT MATCHED THEN       -- when we can't match the record, we insert a new record and this is all that happens
        INSERT (PolicyID, Status)
        VALUES (s.PolicyID, s.Status)
    WHEN MATCHED                -- if it already exists
        AND ExpiryDate IS NULL  -- and the expiry date is null
        THEN UPDATE SET
            dp.ExpiryDate = getdate(),  -- we set the expiry on the existing record
            dp.Active = 0               -- and deactivate the existing record
    OUTPUT $Action MergeAction, s.PolicyID, s.Status  -- the output clause returns a merge action, which can
) MergeOutput   -- be insert/update/delete; in our example a record that has been updated (or expired, in our case)
WHERE           -- is what we keep with the where clause
    MergeAction = 'Update';

Complete source for the example:

if OBJECT_ID('ExampleFactUpdate') > 0
    drop table ExampleFactUpdate
go

Create Table ExampleFactUpdate(
    ID int identity(1,1),
    PolicyID varchar(100),
    Status varchar(100),
    EffectiveDate datetime default getdate(),
    ExpiryDate datetime,
    Active bit default 1
)

insert into ExampleFactUpdate(
    PolicyID,
    Status)
select
    1,
    'Live'

/* Create staging table */
if OBJECT_ID('ExampleStag') > 0
    drop table ExampleStag
go

Create Table ExampleStag(
    PolicyID varchar(100),
    Status varchar(100))

-- add some data
insert into ExampleStag(
    PolicyID,
    Status)
select
    1,
    'Lapsed'
union all
select
    2,
    'Quote'

select *
from ExampleFactUpdate

select *
from ExampleStag

INSERT INTO ExampleFactUpdate
    (PolicyID,
     Status)
SELECT
    PolicyID,
    Status
FROM
(
    MERGE dbo.ExampleFactUpdate dp
    USING dbo.ExampleStag s
        ON dp.PolicyID = s.PolicyID
    WHEN NOT MATCHED THEN
        INSERT (PolicyID, Status)
        VALUES (s.PolicyID, s.Status)
    WHEN MATCHED
        AND ExpiryDate IS NULL
        THEN UPDATE SET
            dp.ExpiryDate = getdate(),
            dp.Active = 0
    OUTPUT $Action MergeAction, s.PolicyID, s.Status
) MergeOutput
WHERE
    MergeAction = 'Update';

select *
from ExampleFactUpdate

    Read the article

  • PHP code works on Chrome, but not on Firefox or IE (send email via HTML form) [on hold]

    - by Cachirro
    My brother has this form:

<form id="lista" action="lista2.php" method="post">
  <input name="cf_name" type="text" size="50" hidden="yes" class="obscure">
  <input name="cf_email" type="text" size="50" hidden="yes" class="obscure">
  <textarea name="cf_message" cols="45" rows="10" hidden="yes" class="obscure"></textarea>
  <input type="image" name="submit" value="Enviar Lista por Email" src="imagens/lista_email.png" width="40" height="40" onclick="this.form.elements['cf_message'].value = lista_mail;this.form.elements['cf_name'].value = prompt('Escreva o seu nome:', '');this.form.elements['cf_email'].value = prompt('Escreva o seu email:', '');">
  <input name="submit2" type="submit" value="Enviar" hidden="yes" class="obscure">
</form>

That calls this PHP file:

<?php
if ( isset($_POST['submit']) ) {
    // SMTP authentication details
    $smtpinfo['host'] = 'localhost';
    $smtpinfo['port'] = '25';
    $smtpinfo['auth'] = true;
    $smtpinfo['username'] = 'xxx';
    $smtpinfo['password'] = 'xxx';
    // Data received from the form
    $nome = $_POST['cf_name'];
    $email = $_POST['cf_email'];
    $mensagem = $_POST['cf_message'];
    // PEAR include. Make sure PEAR is enabled on your hosting
    require_once "Mail.php";
    // Message body
    $body = "Nome: ".$nome;
    $body.= "\n\n";
    $body.= nl2br($mensagem);
    $headers = array ('From' => $email,
                      'To' => $smtpinfo["username"],
                      'Subject' => 'Encomenda Website');
    $mail_object = Mail::factory('smtp', $smtpinfo);
    $mail = $mail_object->send($smtpinfo["username"], $headers, $body);
    if ( PEAR::isError($mail) ) {
        echo ("<p>" . $mail->getMessage() . "</p>");
    } else {
        echo ('<b><font color="FFFF00">Mensagem enviada com sucesso.<br><br></b>Seu email: ' . $email . '<br><br></font>');
    }
}
?>

This basically sends an email with some selected products, plus a name and email address. The problem is that it works perfectly on Chrome, but not on Firefox or IE. When the submit image is pressed, the URL changes to the PHP file, but it displays a blank page. After activating display errors with

ini_set('display_errors',1);
ini_set('display_startup_errors',1);
error_reporting(-1);

FF/IE display a blank page and the email isn't sent, while Chrome sends the email and displays this (don't know if it helps):

Strict Standards: Non-static method Mail::factory() should not be called statically in /var/www/vhosts/[site url]/httpdocs/lista2.php on line 33
Strict Standards: Non-static method PEAR::isError() should not be called statically, assuming $this from incompatible context in /usr/share/php/Mail/smtp.php

So, what is causing the email to be sent on Chrome and not on FF or IE? Thank you.
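    A likely culprit, offered as a hedged guess since the question is unresolved here: the form's only visible submit control is an <input type="image">. Browsers submit an image button's click coordinates as submit_x and submit_y, but not all of them also send a submit key itself; Chrome does, while Firefox and IE variants generally do not, so isset($_POST['submit']) succeeds only on Chrome and the script silently skips the mail code everywhere else. A minimal sketch of a more browser-independent guard:

        <?php
        // Image inputs post name.x / name.y coordinates (PHP converts the
        // dots to underscores), so test those and the request method too,
        // instead of relying on a 'submit' key that some browsers omit.
        if ($_SERVER['REQUEST_METHOD'] === 'POST'
            && (isset($_POST['submit']) || isset($_POST['submit_x']))) {
            // ... existing mail-sending code goes here ...
        }
        ?>

    Alternatively, a regular hidden field (e.g. a hypothetical <input type="hidden" name="action" value="send">) is posted by every browser regardless of which button triggered the submit, and can be tested instead.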

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003f3b6

           Device Boot      Start         End      Blocks    Id  System
        /dev/sda1   *           63      481949      240943+  83  Linux
        /dev/sda2           481950  2910640634  1455079342+  fd  Linux raid autodetect
        /dev/sda3       2910640635  2930272064     9815715   82  Linux swap / Solaris

    /dev/sdb, /dev/sdc and /dev/sdd have the same size and geometry (disk identifiers 0x00069785, 0x00000000 and 0x000f14de respectively) and identical partition tables:

           Device Boot      Start         End      Blocks    Id  System
        /dev/sdb1               63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdb2       2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1; I use GRUB 2 to boot the system off this partition. This system worked well; I even exchanged two disks during the last two years, and it always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue; I know I would have struggled anyway to overcome the problem, with /boot installed on that disk and GRUB 2 installed on the MBR of /dev/sda.

    Anyway, I did what I always do:

    1. Start Knoppix.
    2. Fire up the RAID: sudo mdadm --examine --scan, which returns
           ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
    3. Start it up: sudo mdadm --assemble /dev/md127
    4. Fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
    5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
    6. Stop the RAID: sudo mdadm -S /dev/md127
    7. Take out the disk and replace it with a new one.
    8. Create the same partitions as on the failing one.
    9. Add it to the RAID: sudo mdadm --assemble /dev/md127; sudo mdadm /dev/md127 --add /dev/sda2
    10. Wait 4 hours.

    All looks fine. cat /proc/mdstat returns:

        Personalities : [raid10]
        md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
              2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

        unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

        /dev/md127:
                Version : 0.90
          Creation Time : Wed Jun 10 13:08:46 2009
             Raid Level : raid10
             Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
          Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 127
            Persistence : Superblock is persistent

            Update Time : Thu Mar 21 16:27:40 2013
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 64K

                   UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
                 Events : 0.4824680

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system does not help either (I actually replugged and re-added the failing disk for that; the system begins to start, but then fails to see the / partition, which is no wonder if the volume group is gone). sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay and sudo vgscan --mknodes all return "No volume groups found." I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
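    A hedged recovery sketch, not from the question: if the array assembles cleanly but LVM sees no PV on it, the LVM label and metadata at the start of /dev/md127 may have been overwritten, and since / lives inside the missing VG, the usual /etc/lvm/backup files are out of reach. LVM keeps its metadata as plain text near the start of the PV, so a copy can often be recovered from the device itself and used to recreate the PV and VG. The VG name "vg0" and the UUID placeholder below are assumptions; image the array first if at all possible, because pvcreate writes to the device:

        # Hunt for an old plain-text metadata copy near the start of the array:
        sudo dd if=/dev/md127 bs=1M count=4 2>/dev/null | strings | less
        # Save the most recent complete metadata block to /tmp/vg0.conf, then
        # recreate the PV with its original UUID and restore the VG from it:
        sudo pvcreate --uuid <PV-UUID-from-metadata> --restorefile /tmp/vg0.conf /dev/md127
        sudo vgcfgrestore -f /tmp/vg0.conf vg0
        sudo vgchange -ay vg0
        sudo lvs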

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, types that are not entities, collections or primitive types. These are used for mapping database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement the interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty and an UPDATE will be triggered when the session is flushed. So, in your custom user type, you must implement this comparison carefully, so that the value is not mistakenly considered changed. For example, you can cache the original column value inside the type and compare it with the one in the other instance. Let's see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image:

        [Serializable]
        public sealed class ImageUserType : IUserType
        {
            private Byte[] data = null;

            public ImageUserType()
            {
                this.ImageFormat = ImageFormat.Png;
            }

            public ImageFormat ImageFormat { get; set; }

            public Boolean IsMutable
            {
                get { return (true); }
            }

            public Object Assemble(Object cached, Object owner)
            {
                return (cached);
            }

            public Object DeepCopy(Object value)
            {
                return (value);
            }

            public Object Disassemble(Object value)
            {
                return (value);
            }

            public new Boolean Equals(Object x, Object y)
            {
                return (Object.Equals(x, y));
            }

            public Int32 GetHashCode(Object x)
            {
                return ((x != null) ? x.GetHashCode() : 0);
            }

            public override Int32 GetHashCode()
            {
                return ((this.data != null) ? this.data.GetHashCode() : 0);
            }

            public override Boolean Equals(Object obj)
            {
                ImageUserType other = obj as ImageUserType;

                if (other == null)
                {
                    return (false);
                }

                if (Object.ReferenceEquals(this, other) == true)
                {
                    return (true);
                }

                return (this.data.SequenceEqual(other.data));
            }

            public Object NullSafeGet(IDataReader rs, String[] names, Object owner)
            {
                Int32 index = rs.GetOrdinal(names[0]);
                Byte[] data = rs.GetValue(index) as Byte[];

                this.data = data as Byte[];

                if (data == null)
                {
                    return (null);
                }

                using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0]))
                {
                    return (Image.FromStream(stream));
                }
            }

            public void NullSafeSet(IDbCommand cmd, Object value, Int32 index)
            {
                if (value != null)
                {
                    Image data = value as Image;

                    using (MemoryStream stream = new MemoryStream())
                    {
                        data.Save(stream, this.ImageFormat);
                        value = stream.ToArray();
                    }
                }

                (cmd.Parameters[index] as DbParameter).Value = value ?? DBNull.Value;
            }

            public Object Replace(Object original, Object target, Object owner)
            {
                return (original);
            }

            public Type ReturnedType
            {
                get { return (typeof(Image)); }
            }

            public SqlType[] SqlTypes
            {
                get { return (new SqlType[] { new SqlType(DbType.Binary) }); }
            }
        }

    In this case, we need to cache the original Byte[] data because it's not easy to compare two Image instances, unless, of course, they are the same instance.
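    For context, this is roughly how a property would be declared with such a user type in an hbm.xml mapping; the class, property, column and assembly names here are invented for the example:

        <class name="Product" table="PRODUCT">
          <id name="Id">
            <generator class="native" />
          </id>
          <!-- type is the assembly-qualified name of the IUserType implementation -->
          <property name="Photo" column="PHOTO" type="MyApp.ImageUserType, MyApp" />
        </class>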

    Read the article

  • Problem with boundary collision

    - by James Century
    The problem: when the player hits the left boundary, he stops (this is exactly what I want); when he hits the right boundary, he keeps going until his rectangle's left edge meets the right boundary. Outcome: https://www.youtube.com/watch?v=yuJfIWZ_LL0&feature=youtu.be

    My code:

        public class Player extends GameObject {
            BufferedImageLoader loader;
            Texture tex = Game.getInstance();
            BufferedImage image;
            Animation playerWalkLeft;
            private HealthBarManager healthBar;
            private String username;
            private int width;
            private ManaBarManager manaBar;

            public Player(float x, float y, ObjectID ID) {
                super(x, y, ID, null);
                loader = new BufferedImageLoader();
                playerWalkLeft = new Animation(5, tex.player[10], tex.player[11], tex.player[12],
                        tex.player[13], tex.player[14], tex.player[15], tex.player[17], tex.player[18]);
            }

            public void tick(LinkedList<GameObject> object) {
                setX(getX() + velX);
                setY(getY() + velY);
                playerWalkLeft.runAnimation();
            }

            public void render(Graphics g) {
                g.setColor(Color.BLACK);
                FontMetrics fm = g.getFontMetrics(g.getFont());
                if (username != null)
                    width = fm.stringWidth(username);
                if (username != null) {
                    g.drawString(username, (int) x - width / 2 + 15, (int) y);
                }
                if (velX != 0) {
                    playerWalkLeft.drawAnimation(g, (int) x, (int) y);
                } else {
                    g.drawImage(tex.player[16], (int) x, (int) y, null);
                }
                g.setColor(Color.PINK);
                g.drawRect((int) x, (int) y, 33, 48);
                g.drawRect(0, 0, (int) Game.getWalkableBounds().getWidth(),
                        (int) Game.getWalkableBounds().getHeight());
            }

            @SuppressWarnings("unused")
            private Image getCurrentImage() { return image; }

            public float getX() { return x; }
            public float getY() { return y; }

            public void setX(float x) {
                Rectangle gameBoundry = Game.getWalkableBounds();
                if (x >= gameBoundry.getMinX() && x <= gameBoundry.getMaxX()) {
                    this.x = x;
                }
            }

            // Ignore the setY, please.
            public void setY(float y) { this.y = y; }

            public float getVelX() { return velX; }
            public float getVelY() { return velY; }
            public void setVelX(float velX) { this.velX = velX; }
            public void setVelY(float velY) { this.velY = velY; }

            public void setHealthBar(HealthBarManager healthBar) { this.healthBar = healthBar; }
            public HealthBarManager getHealthBar() { return healthBar; }
            public void setManaBar(ManaBarManager manaBar) { this.manaBar = manaBar; }
            public ManaBarManager getManaBar() { return manaBar; }

            public ObjectID getID() { return ID; }
            public void setUsername(String playerName) { this.username = playerName; }
            public String getUsername() { return this.username; }
            public int getLevel() { return 1; }

            public boolean isPlayerInsideBoundry(float x, float y) {
                Rectangle boundry = Game.getWalkableBounds();
                return boundry.contains(x, y);
            }
        }

    What I've tried: a method that checks whether the game boundary contains the player's bounding rectangle; this gave me the same result as the check in my setX.
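    A likely explanation, judging from the code above: setX only tests the player's x origin against the boundary, so the sprite can travel its full width past the right edge before x itself reaches getMaxX(). A hedged sketch of a width-aware clamp (the 33 px width is taken from the drawRect call in render(); substitute the real sprite width):

        public void setX(float x) {
            Rectangle gameBoundry = Game.getWalkableBounds();
            float playerWidth = 33; // matches g.drawRect((int) x, (int) y, 33, 48)

            // Clamp so the RIGHT edge of the player's rectangle stays inside too:
            if (x >= gameBoundry.getMinX() && x + playerWidth <= gameBoundry.getMaxX()) {
                this.x = x;
            }
        }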

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single-threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 vmtasks threads are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86).

    The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.)

    The following table separates the creation and destruction times:

        ISM, T4-2
                  before  after
                  ------  -----
        create       702     64
        destroy      495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
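    As a point of reference, here is a minimal sketch (not from the original post) of the kind of operation vmtasks accelerates: creating a DISM segment and then locking it page by page. SHM_PAGEABLE and SHM_SHARE_MMU are the Solaris shmat() flags named above; the 1 GB size is arbitrary, error handling is trimmed, and mlock() requires the proc_lock_memory privilege:

        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void) {
            size_t size = 1024UL * 1024 * 1024;   /* 1 GB; arbitrary for the sketch */

            /* DISM: pageable shared memory, locked explicitly afterwards. */
            int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
            if (id == -1) { perror("shmget"); return 1; }

            void *addr = shmat(id, NULL, SHM_PAGEABLE);
            if (addr == (void *)-1) { perror("shmat"); return 1; }

            /* This walks and locks every page; vmtasks parallelizes the work. */
            if (mlock(addr, size) != 0)
                perror("mlock");

            /* An ISM segment would instead be attached with SHM_SHARE_MMU and
               locked implicitly on that first shmat(). */

            munlock(addr, size);
            shmdt(addr);
            shmctl(id, IPC_RMID, NULL);
            return 0;
        }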

    Read the article

  • cannot install firmware-b43-installer

    - by unknown
    This is the output I get when installing; installArchives() failed:

        Preconfiguring packages ...
        Selecting previously deselected package menu.
        (Reading database ... 216125 files and directories currently installed.)
        Unpacking menu (from .../menu_2.1.44ubuntu1_i386.deb) ...
        Selecting previously deselected package wifi-radar.
        Unpacking wifi-radar (from .../wifi-radar_2.0.s05-1.2_all.deb) ...
        Processing triggers for man-db ...
        Processing triggers for install-info ...
        Processing triggers for doc-base ...
        Processing 1 added doc-base file(s)...
        Registering documents with scrollkeeper...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for python-gmenu ...
        Rebuilding /usr/share/applications/desktop.en_US.UTF8.cache...
        Processing triggers for python-support ...
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:30--  http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
         subprocess installed post-installation script returned error exit status 4
        No apport report written because MaxReports is reached already
        Setting up menu (2.1.44ubuntu1) ...
        Processing triggers for menu ...
        Setting up wifi-radar (2.0.s05-1.2) ...
        Processing triggers for menu ...
        Errors were encountered while processing:
         firmware-b43-installer
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:33--  http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
         subprocess installed post-installation script returned error exit status 4

    The same thing occurs when I try to install any wireless application: all other software installs fine, and the same error appears whenever the firmware is involved. I tried going to the link (http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2) and downloading the package myself, but found no makefile in the package. Please help me.
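    A hedged workaround, assuming the only real failure is the blocked download from mirror2.openwrt.org: fetch the tarball by other means and cut the firmware out of it manually with b43-fwcutter (no makefile is needed; the tarball only supplies the binary driver blob the firmware is extracted from). The file path inside the tarball is the one conventionally used for this driver version; verify it after unpacking:

        # Fetch the tarball from any machine/mirror that can reach it,
        # then copy it to the affected box:
        wget http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        tar xjf broadcom-wl-4.150.10.5.tar.bz2

        # Extract the b43 firmware into /lib/firmware (requires b43-fwcutter):
        sudo b43-fwcutter -w /lib/firmware broadcom-wl-4.150.10.5/driver/wl_apsta_mimo.o

        # Reload the driver so it picks up the new firmware:
        sudo modprobe -r b43 && sudo modprobe b43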

    Read the article
