Search Results


  • Android: OutOfMemoryError while uploading video - how best to chunk?

    - by AP257
    Hi all, I have the same problem as described here, but I will supply a few more details. While trying to upload a video in Android, I'm reading it into memory, and if the video is large I get an OutOfMemoryError. Here's my code:

        // get bytestream to upload
        videoByteArray = getBytesFromFile(cR, fileUriString);

        public static byte[] getBytesFromFile(ContentResolver cR, String fileUriString) throws IOException {
            Uri tempuri = Uri.parse(fileUriString);
            InputStream is = cR.openInputStream(tempuri);
            byte[] b3 = readBytes(is);
            is.close();
            return b3;
        }

        public static byte[] readBytes(InputStream inputStream) throws IOException {
            ByteArrayOutputStream byteBuffer = new ByteArrayOutputStream();
            // this is storage overwritten on each iteration with bytes
            int bufferSize = 1024;
            byte[] buffer = new byte[bufferSize];
            int len = 0;
            while ((len = inputStream.read(buffer)) != -1) {
                byteBuffer.write(buffer, 0, len);
            }
            return byteBuffer.toByteArray();
        }

    And here's the traceback (the error is thrown on the byteBuffer.write(buffer, 0, len) line):

        04-08 11:56:20.456: ERROR/dalvikvm-heap(6088): Out of memory on a 16775184-byte allocation.
        04-08 11:56:20.456: INFO/dalvikvm(6088): "IntentService[UploadService]" prio=5 tid=17 RUNNABLE
        04-08 11:56:20.456: INFO/dalvikvm(6088): | group="main" sCount=0 dsCount=0 s=N obj=0x449a3cf0 self=0x38d410
        04-08 11:56:20.456: INFO/dalvikvm(6088): | sysTid=6119 nice=0 sched=0/0 cgrp=default handle=4010416
        04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:~93)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Handler.dispatchMessage(Handler.java:99)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Looper.loop(Looper.java:123)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.HandlerThread.run(HandlerThread.java:60)
        04-08 11:56:20.467: WARN/dalvikvm(6088): threadid=17: thread exiting with uncaught exception (group=0x4001b180)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): Uncaught handler: thread IntentService[UploadService] exiting due to uncaught exception
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): java.lang.OutOfMemoryError
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:93)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Handler.dispatchMessage(Handler.java:99)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Looper.loop(Looper.java:123)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.HandlerThread.run(HandlerThread.java:60)
        04-08 11:56:20.496: INFO/Process(4657): Sending signal. PID: 6088 SIG: 3

    I guess that, as @DroidIn suggests, I need to upload it in chunks. But (newbie question alert) does that mean that I should make multiple PostMethod requests and glue the file together at the server end? Or can I load the bytestream into memory in chunks, and glue it together in the Android code? If anyone could give me a clue as to the best approach, I would be very grateful.
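
    One way to sidestep the chunk-and-glue question entirely is to stream the file straight from the ContentResolver to the socket, so that only one small buffer is ever in memory. A minimal sketch, assuming a plain HttpURLConnection rather than the PostMethod mentioned above (and the usual java.net / java.io imports); UPLOAD_URL is a hypothetical constant and error handling is omitted:

        // Stream the video to the server without ever materialising it in memory.
        HttpURLConnection conn = (HttpURLConnection) new URL(UPLOAD_URL).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // Chunked streaming stops HttpURLConnection from buffering the whole body itself.
        conn.setChunkedStreamingMode(8 * 1024);

        InputStream in = cR.openInputStream(Uri.parse(fileUriString));
        OutputStream out = conn.getOutputStream();
        byte[] buffer = new byte[8 * 1024];
        int len;
        while ((len = in.read(buffer)) != -1) {
            out.write(buffer, 0, len); // the same 8 KB buffer is reused for every chunk
        }
        out.close();
        in.close();
        int status = conn.getResponseCode();

    Because the chunking happens at the HTTP transfer level, the server still sees a single ordinary POST body, so nothing has to be reassembled by hand on either end.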


  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; much like C, everything lives in one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with NS and says this prefix is reserved for Apple only. So far so good.

    But prefixing a class name with 2 to 4 letters gives a very, very limited namespace. E.g. MS or AI could have entirely different meanings (AI could be Artificial Intelligence, for example) and some other developer might decide to use them and create an equally named class. Bang, namespace collision.

    Okay, if this is a collision between one of your own classes and one from an external framework you are using, you can easily rename your class, no big deal. But what if you use two external frameworks, neither of which you have the source to and neither of which you can change? Your application links against both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them so that you can still use both classes?

    In C you can work around this by not linking directly to the library; instead you load the library at runtime using dlopen(), find the symbol you are looking for with dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through that global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the library's open() function; when you want the system open(), you just use open(), and when you want the other one, you access it via the myOpen identifier. Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution for resolving namespace conflicts? Any ideas?

    Update: Just to clarify: answers that suggest how to avoid namespace collisions in advance, or how to create a better namespace, are certainly welcome; however, I will not accept them as the answer since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution that lets me work with both frameworks right now, within a single application. Any solutions to make this possible?


  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how best to refactor this controller. The controller builds a page of zones and modules: Page has_many zones, zone has_many modules, so zones are just clusters of modules wrapped in a container. The problem I'm having is that some modules may have specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test whether the module is on the page; if it is, the query is executed. One problem with this is that if I add a hundred special module queries, the controller has to iterate through each one. I would like to see these module conditions moved out of the controller, along with all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using this controller, so it could get messy.

        class PagesController < ApplicationController
          # GET /pages/1
          # GET /pages/1.xml
          # Show is the main page rendering action, page routes are aliased in routes.rb
          def show
            #-+-+-+-+-Core Page Queries-+-+-+-+-
            @page = Page.find(params[:id])
            @zones = @page.zones.find(:all, :order => 'zones.list_order ASC')
            @mods = @page.mods.find(:all)
            @columns = Page.columns

            # restful params to influence page rendering, see routes.rb
            @fragment = params[:fragment] # render single module
            @cluster = params[:cluster]   # render single zone
            @head = params[:head]         # render html, body and head

            #-+-+-+-+-Page Level Json Conversions-+-+-+-+-
            @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil
            @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil

            #-+-+-+-+-Module Specific Queries-+-+-+-+-
            # would like to refactor this process
            @mods.each do |mod|
              # Reps Module Custom Queries
              if mod.name == "reps"
                @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } })
              end
              # Listing-poc Module Custom Queries
              if mod.name == "listing-poc"
                limit = params[:limit].to_i < 1 ? 10 : params[:limit]
                PropertyEntry.update_from_listing(mod.service_url)
                @properties = PropertyEntry.all(:limit => limit, :order => "city desc")
              end
              # Talents-index Module Custom Queries
              if mod.name == "talents-index"
                @talent = params[:type]
                @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } })
              end
            end

            respond_to do |format|
              format.html # show.html.erb
              format.xml  { render :xml => @page.to_xml(:include => { :zones => { :include => :mods } }) }
              format.json { render :json => @page.to_json }
              format.css  # show.css.erb, CSS dependency manager template
            end
          end

          # for property listing ajax request
          def update_properties
            limit = params[:limit].to_i < 1 ? 10 : params[:limit]
            offset = params[:offset]
            @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc")
            #render :nothing => true
          end
        end

    So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.
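
    One direction that might behave "more like a configuration" is to key each module's queries by mod.name in a registry, so the controller no longer grows with the module count. A hedged sketch - ModQueries is an invented name, and the lambdas just wrap the same Rails 2-era finders used above:

        # Sketch: per-module queries as configuration rather than controller branches.
        module ModQueries
          QUERIES = {
            "reps" => lambda { |mod, params|
              { :reps => User.find(:all, :joins => :roles,
                                   :conditions => { :roles => { :name => 'rep' } }) }
            },
            "listing-poc" => lambda { |mod, params|
              PropertyEntry.update_from_listing(mod.service_url)
              limit = params[:limit].to_i < 1 ? 10 : params[:limit]
              { :properties => PropertyEntry.all(:limit => limit, :order => "city desc") }
            },
            "talents-index" => lambda { |mod, params|
              { :talent => params[:type],
                :reps => User.find(:all, :joins => :talents,
                                   :conditions => { :talents => { :name => params[:type] } }) }
            }
          }

          # Run only the queries whose modules are actually on the page.
          def self.assigns_for(mods, params)
            mods.inject({}) do |assigns, mod|
              query = QUERIES[mod.name]
              assigns.merge!(query.call(mod, params)) if query
              assigns
            end
          end
        end

    The module-specific block in show would then collapse to a single line:

        ModQueries.assigns_for(@mods, params).each { |name, value| instance_variable_set("@#{name}", value) }

    A hundredth module would register one more lambda (which could live in the module's own file) instead of adding another branch to the controller.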


  • How best to modernize the 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000's. Currently they add a feature here and there, but they have a large codebase. Over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school-counselor shame into the mix.]

    Java, Vintage 2002
    The application uses EJB 2.1, Struts 1.x, DAOs etc. with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years, they have spent effort modernizing the UI using Ajax techniques and libraries, largely JavaScript libraries (jQuery, YUI, etc.).

    Client Side
    On the client side, the lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating. Other web frameworks became popular (Wicket, Spring, JSF); Struts 2 was not the clear winner. Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to present much marginal benefit at a very high cost. They did not want a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features using Struts 1.

    Server Side
    On the server side, they looked into moving to EJB 3, but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome, and EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the EJB server runs in a separate JVM from the servlet engine. No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box from the app server. The ear file contains multiple copies of the same jar file: one for the web layer (foo.war/WEB-INF/lib) and one for the server side (foo.ear/). The app server only loads one jar; the duplication makes for ambiguity.

    Caching
    As for caching, they use several cache implementations: OpenSymphony OSCache and a homegrown cache. JGroups provides clustering support.

    Now What?
    The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new subsystems/features and reducing maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred. What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that 64-bit JVMs offer? Thanks in advance.


  • How best to propagate changes up a hierarchical structure for binding?

    - by H.B.
    If I have a folder-like structure that uses the composite design pattern and bind the root folder to a TreeView, it would be quite useful if I could display certain properties that are accumulated from the folder's contents. The question is: how do I best inform the folder that changes occurred in a child element, so that the accumulative properties get updated? The context in which I need this is a small RSS feed reader I am trying to write. These are the most important objects and aspects of my model:

    Composite interface:

        public interface IFeedComposite : INotifyPropertyChanged
        {
            string Title { get; set; }
            int UnreadFeedItemsCount { get; }
            ObservableCollection<FeedItem> FeedItems { get; }
        }

    FeedComposite (aka Folder):

        public class FeedComposite : BindableObject, IFeedComposite
        {
            private string title = "";
            public string Title
            {
                get { return title; }
                set { title = value; NotifyPropertyChanged("Title"); }
            }

            private ObservableCollection<IFeedComposite> children = new ObservableCollection<IFeedComposite>();
            public ObservableCollection<IFeedComposite> Children
            {
                get { return children; }
                set
                {
                    children.Clear();
                    foreach (IFeedComposite item in value)
                    {
                        children.Add(item);
                    }
                    NotifyPropertyChanged("Children");
                }
            }

            public FeedComposite() { }
            public FeedComposite(string title)
            {
                Title = title;
            }

            public ObservableCollection<FeedItem> FeedItems
            {
                get
                {
                    ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
                    foreach (IFeedComposite child in Children)
                    {
                        foreach (FeedItem item in child.FeedItems)
                        {
                            feedItems.Add(item);
                        }
                    }
                    return feedItems;
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }
        }

    Feed:

        public class Feed : BindableObject, IFeedComposite
        {
            private string url = "";
            public string Url
            {
                get { return url; }
                set { url = value; NotifyPropertyChanged("Url"); }
            }

            ...

            private ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
            public ObservableCollection<FeedItem> FeedItems
            {
                get { return feedItems; }
                set
                {
                    feedItems.Clear();
                    foreach (FeedItem item in value)
                    {
                        AddFeedItem(item);
                    }
                    NotifyPropertyChanged("Items");
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }

            public Feed() { }
            public Feed(string url)
            {
                Url = url;
            }
        }

    Okay, so here's the thing: if I bind a TextBlock.Text to UnreadFeedItemsCount, there won't be any notification when an item is marked unread. So one of my approaches has been to handle the PropertyChanged event of every FeedItem, and if the IsUnread property changed, have the Feed raise a notification that UnreadFeedItemsCount has changed. With this approach I also need to handle all PropertyChanged events of all Feeds and FeedComposites in the Children of a FeedComposite; it should be obvious that this is not such a good idea. You need to be very careful that items never get added to or removed from any collection without the PropertyChanged handler being attached first, and things like that. Also: what do I do with the CollectionChanged events, which necessarily also cause a change in the unread-items count? Sounds like more event-handling fun. It is such a mess; it would be great if anyone has an elegant solution, since I don't want this feed reader to end up as awful as my first attempt years ago, when I didn't even know about data binding...
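
    For what it's worth, the wiring can at least be centralised in one place instead of being scattered across the tree. A hedged sketch of what that might look like on FeedComposite - it assumes the members shown above plus System.Collections.Specialized for the event args, and it is not a drop-in class:

        public FeedComposite()
        {
            children.CollectionChanged += OnChildrenChanged;
        }

        private void OnChildrenChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            // Keep child subscriptions in sync with the collection, so no
            // item can ever be in the tree without a handler attached.
            if (e.OldItems != null)
                foreach (IFeedComposite removed in e.OldItems)
                    removed.PropertyChanged -= OnChildPropertyChanged;
            if (e.NewItems != null)
                foreach (IFeedComposite added in e.NewItems)
                    added.PropertyChanged += OnChildPropertyChanged;

            // Adding or removing a child changes the aggregates as well.
            NotifyPropertyChanged("FeedItems");
            NotifyPropertyChanged("UnreadFeedItemsCount");
        }

        private void OnChildPropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            // Re-raise upwards only for properties that feed the aggregates;
            // the notification then bubbles from leaf to root one level at a time.
            if (e.PropertyName == "UnreadFeedItemsCount" || e.PropertyName == "FeedItems")
            {
                NotifyPropertyChanged("FeedItems");
                NotifyPropertyChanged("UnreadFeedItemsCount");
            }
        }

    Each Feed would do the equivalent for its own FeedItems collection, raising UnreadFeedItemsCount when a FeedItem's IsUnread flips. Because every composite re-raises only to its own parent, the subscription bookkeeping stays local to one class rather than leaking into the rest of the code.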


  • What is the best way to slide a panel in WPF?

    - by Kris Erickson
    I have a fairly simple UserControl that I have made (pardon my Xaml I am just learning WPF) and I want to slide the off the screen. To do so I am animating a translate transform (I also tried making the Panel the child of a canvas and animating the X position with the same results), but the panel moves very jerkily, even on a fairly fast new computer. What is the best way to slide in and out (preferably with KeySplines so that it moves with inertia) without getting the jerkyness. I only have 8 buttons on the panel, so I didn't think it would be too much of a problem. Here is the Xaml I am using, it runs fine in Kaxaml, but it is very jerky and slow (as well as being jerkly and slow when run compiled in a WPF app). <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="1002" Height="578"> <UserControl.Resources> <Style TargetType="Button"> <Setter Property="Control.Padding" Value="4"/> <Setter Property="Control.Margin" Value="10"/> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid Name="backgroundGrid" Width="210" Height="210" Background="#00FFFFFF"> <Grid.BitmapEffect> <BitmapEffectGroup> <DropShadowBitmapEffect x:Name="buttonDropShadow" ShadowDepth="2"/> <OuterGlowBitmapEffect x:Name="buttonGlow" GlowColor="#A0FEDF00" GlowSize="0"/> </BitmapEffectGroup> </Grid.BitmapEffect> <Border x:Name="background" Margin="1,1,1,1" CornerRadius="15"> <Border.Background> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="#FF0062B6"/> <GradientStop Offset="1" Color="#FF0089FE"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.Background> </Border> <Border Margin="1,1,1,0" BorderBrush="#FF000000" BorderThickness="1.5" CornerRadius="15"/> <ContentPresenter HorizontalAlignment="Center" Margin="{TemplateBinding Control.Padding}" VerticalAlignment="Center" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Canvas> <Grid x:Name="Panel1" Height="578" Canvas.Left="0" Canvas.Top="0"> <Grid.RenderTransform> <TransformGroup> <TranslateTransform x:Name="panelTranslate" X="0" Y="0"/> </TransformGroup> </Grid.RenderTransform> <Grid.RowDefinitions> <RowDefinition Height="287"/> <RowDefinition Height="287"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="Panel1Col1"/> <ColumnDefinition x:Name="Panel1Col2"/> <ColumnDefinition x:Name="Panel1Col3"/> <ColumnDefinition x:Name="Panel1Col4"/> <!-- Set width to 0 to hide a column--> </Grid.ColumnDefinitions> <Button x:Name="Panel1Product1" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"> <Button.Triggers> <EventTrigger RoutedEvent="Button.Click" SourceName="Panel1Product1"> <EventTrigger.Actions> <BeginStoryboard> <Storyboard> <DoubleAnimation BeginTime="00:00:00.6" Duration="0:0:3" From="0" Storyboard.TargetName="panelTranslate" Storyboard.TargetProperty="X" To="-1000"/> </Storyboard> </BeginStoryboard> </EventTrigger.Actions> </EventTrigger> </Button.Triggers> </Button> <Button x:Name="Panel1Product2" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product3" Grid.Column="1" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product4" 
Grid.Column="1" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product5" Grid.Column="2" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product6" Grid.Column="2" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product7" Grid.Column="3" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product8" Grid.Column="3" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> </Grid> </Canvas> </UserControl>
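
    A couple of hedged observations rather than a definitive fix: the BitmapEffectGroup (DropShadowBitmapEffect / OuterGlowBitmapEffect) on every button is the usual suspect for this kind of jerkiness, because WPF's legacy BitmapEffects are rendered in software and force the affected subtree through the software path on every animation frame. Removing them - or, on .NET 3.5 SP1 and later, swapping in the hardware-accelerated System.Windows.Media.Effects.DropShadowEffect - is worth trying before anything else. For the inertia part, a splined key-frame animation started from code-behind might look like this (the target value and timings are illustrative):

        // Slide the panel out with an ease-out "inertia" curve.
        var slide = new DoubleAnimationUsingKeyFrames();
        slide.KeyFrames.Add(new SplineDoubleKeyFrame(
            -1000,                                          // destination X
            KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1)),
            new KeySpline(0.4, 0.0, 0.2, 1.0)));            // fast start, gentle stop
        panelTranslate.BeginAnimation(TranslateTransform.XProperty, slide);

    Animating a TranslateTransform, as the XAML above already does, is the right approach; it is most likely the per-frame bitmap effects, not the transform, that need to go.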


  • What's the best name for a non-mutating "add" method on an immutable collection?

    - by Jon Skeet
    Sorry for the waffly title - if I could come up with a concise title, I wouldn't have to ask the question. Suppose I have an immutable list type. It has an operation Foo(x) which returns a new immutable list with the specified argument as an extra element at the end. So to build up a list of strings with values "Hello", "immutable", "world" you could write:

        var empty = new ImmutableList<string>();
        var list1 = empty.Foo("Hello");
        var list2 = list1.Foo("immutable");
        var list3 = list2.Foo("world");

    (This is C# code, and I'm most interested in a C# suggestion if you feel the language is important. It's not fundamentally a language question, but the idioms of the language may be important.) The important thing is that the existing lists are not altered by Foo - so empty.Count would still return 0. Another (more idiomatic) way of getting to the end result would be:

        var list = new ImmutableList<string>().Foo("Hello")
                                              .Foo("immutable")
                                              .Foo("world");

    My question is: what's the best name for Foo?

    EDIT 3: As I reveal later on, the name of the type might not actually be ImmutableList<T>, which makes the position clear. Imagine instead that it's TestSuite and that it's immutable because the whole of the framework it's a part of is immutable... (End of edit 3)

    Options I've come up with so far:

    Add: common in .NET, but implies mutation of the original list.
    Cons: I believe this is the normal name in functional languages, but meaningless to those without experience in such languages.
    Plus: my favourite so far, it doesn't imply mutation to me. Apparently this is also used in Haskell but with slightly different expectations (a Haskell programmer might expect it to add two lists together rather than adding a single value to the other list).
    With: consistent with some other immutable conventions, but doesn't have quite the same "additionness" to it IMO.
    And: not very descriptive.
    Operator overload for +: I really don't like this much; I generally think operators should only be applied to lower level types. I'm willing to be persuaded though!

    The criteria I'm using for choosing are:

    - Gives the correct impression of the result of the method call (i.e. that it's the original list with an extra element)
    - Makes it as clear as possible that it doesn't mutate the existing list
    - Sounds reasonable when chained together as in the second example above

    Please ask for more details if I'm not making myself clear enough...

    EDIT 1: Here's my reasoning for preferring Plus to Add. Consider these two lines of code:

        list.Add(foo);
        list.Plus(foo);

    In my view (and this is a personal thing) the latter is clearly buggy - it's like writing "x + 5;" as a statement on its own. The first line looks like it's okay, until you remember that it's immutable. In fact, the way that the plus operator on its own doesn't mutate its operands is another reason why Plus is my favourite. Without the slight ickiness of operator overloading, it still gives the same connotations, which include (for me) not mutating the operands (or the method target in this case).

    EDIT 2: Reasons for not liking Add. Various answers are effectively: "Go with Add. That's what DateTime does, and String has Replace methods etc. which don't make the immutability obvious." I agree - there's precedent here. However, I've seen plenty of people call DateTime.Add or String.Replace and expect mutation. There are loads of newsgroup questions (and probably SO ones if I dig around) which are answered by "You're ignoring the return value of String.Replace; strings are immutable, a new string gets returned."

    Now, I should reveal a subtlety to the question - the type might not actually be an immutable list, but a different immutable type. In particular, I'm working on a benchmarking framework where you add tests to a suite, and that creates a new suite. It might be obvious that:

        var list = new ImmutableList<string>();
        list.Add("foo");

    isn't going to accomplish anything, but it becomes a lot murkier when you change it to:

        var suite = new TestSuite<string, int>();
        suite.Add(x => x.Length);

    That looks like it should be okay. Whereas this, to me, makes the mistake clearer:

        var suite = new TestSuite<string, int>();
        suite.Plus(x => x.Length);

    That's just begging to be:

        var suite = new TestSuite<string, int>().Plus(x => x.Length);

    Ideally, I would like my users not to have to be told that the test suite is immutable. I want them to fall into the pit of success. This may not be possible, but I'd like to try. I apologise for over-simplifying the original question by talking only about an immutable list type. Not all collections are quite as self-descriptive as ImmutableList<T> :)
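
    To make the semantics under discussion concrete, here is a throwaway sketch of the shape being named - a naive array-copying implementation, with Plus standing in for Foo:

        public sealed class ImmutableList<T>
        {
            private readonly T[] items;

            public ImmutableList() : this(new T[0]) { }
            private ImmutableList(T[] items) { this.items = items; }

            public int Count { get { return items.Length; } }

            // Returns a NEW list; 'this' is never touched, just as
            // "x + 5" never changes x.
            public ImmutableList<T> Plus(T item)
            {
                T[] copy = new T[items.Length + 1];
                items.CopyTo(copy, 0);
                copy[items.Length] = item;
                return new ImmutableList<T>(copy);
            }
        }

    (Array copying is just the simplest thing that demonstrates the contract; a real implementation would share structure between versions.)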


  • Big smart ViewModels, dumb Views, and any model, the best MVVM approach?

    - by Edward Tanguay
    The following code is a refactoring of my previous MVVM approach (Fat Models, skinny ViewModels and dumb Views, the best MVVM approach?) in which I moved the logic and INotifyPropertyChanged implementation from the model back up into the ViewModel. This makes more sense, since as was pointed out, you often you have to use models that you either can't change or don't want to change and so your MVVM approach should be able to work with any model class as it happens to exist. This example still allows you to view the live data from your model in design mode in Visual Studio and Expression Blend which I think is significant since you could have a mock data store that the designer connects to which has e.g. the smallest and largest strings that the UI can possibly encounter so that he can adjust the design based on those extremes. Questions: I'm a bit surprised that I even have to "put a timer" in my ViewModel since it seems like that is a function of INotifyPropertyChanged, it seems redundant, but it was the only way I could get the XAML UI to constantly (once per second) reflect the state of my model. So it would be interesting to hear anyone who may have taken this approach if you encountered any disadvantages down the road, e.g. with threading or performance. The following code will work if you just copy the XAML and code behind into a new WPF project. XAML: <Window x:Class="TestMvvm73892.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:TestMvvm73892" Title="Window1" Height="300" Width="300"> <Window.Resources> <ObjectDataProvider x:Key="DataSourceCustomer" ObjectType="{x:Type local:CustomerViewModel}" MethodName="GetCustomerViewModel"/> </Window.Resources> <DockPanel DataContext="{StaticResource DataSourceCustomer}"> <StackPanel DockPanel.Dock="Top" Orientation="Horizontal"> <TextBlock Text="{Binding Path=FirstName}"/> <TextBlock Text=" "/> <TextBlock Text="{Binding Path=LastName}"/> </StackPanel> <StackPanel DockPanel.Dock="Top" Orientation="Horizontal"> <TextBlock Text="{Binding Path=TimeOfMostRecentActivity}"/> </StackPanel> </DockPanel> </Window> Code Behind: using System; using System.Windows; using System.ComponentModel; using System.Threading; namespace TestMvvm73892 { public partial class Window1 : Window { public Window1() { InitializeComponent(); } } //view model public class CustomerViewModel : INotifyPropertyChanged { private string _firstName; private string _lastName; private DateTime _timeOfMostRecentActivity; private Timer _timer; public string FirstName { get { return _firstName; } set { _firstName = value; this.RaisePropertyChanged("FirstName"); } } public string LastName { get { return _lastName; } set { _lastName = value; this.RaisePropertyChanged("LastName"); } } public DateTime TimeOfMostRecentActivity { get { return _timeOfMostRecentActivity; } set { _timeOfMostRecentActivity = value; this.RaisePropertyChanged("TimeOfMostRecentActivity"); } } public CustomerViewModel() { _timer = new Timer(CheckForChangesInModel, null, 0, 1000); } private void CheckForChangesInModel(object state) { Customer currentCustomer = CustomerViewModel.GetCurrentCustomer(); MapFieldsFromModeltoViewModel(currentCustomer, this); } public static CustomerViewModel GetCustomerViewModel() { CustomerViewModel customerViewModel = new CustomerViewModel(); Customer currentCustomer = CustomerViewModel.GetCurrentCustomer(); MapFieldsFromModeltoViewModel(currentCustomer, customerViewModel); 
return customerViewModel; } public static void MapFieldsFromModeltoViewModel(Customer model, CustomerViewModel viewModel) { viewModel.FirstName = model.FirstName; viewModel.LastName = model.LastName; viewModel.TimeOfMostRecentActivity = model.TimeOfMostRecentActivity; } public static Customer GetCurrentCustomer() { return Customer.GetCurrentCustomer(); } //INotifyPropertyChanged implementation public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } } //model public class Customer { public string FirstName { get; set; } public string LastName { get; set; } public DateTime TimeOfMostRecentActivity { get; set; } public static Customer GetCurrentCustomer() { return new Customer { FirstName = "Jim", LastName = "Smith", TimeOfMostRecentActivity = DateTime.Now }; } } }
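
    On the threading question raised above: System.Threading.Timer fires its callback on a thread-pool thread, so CheckForChangesInModel sets the ViewModel's properties - and raises PropertyChanged - off the UI thread. One hedged alternative that keeps the polling on the dispatcher thread is a DispatcherTimer (System.Windows.Threading); the constructor sketch below assumes the same members as the CustomerViewModel shown above:

        private DispatcherTimer _timer;

        public CustomerViewModel()
        {
            _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
            _timer.Tick += (s, e) =>
                MapFieldsFromModeltoViewModel(GetCurrentCustomer(), this);
            _timer.Start();
        }

    WPF happens to marshal scalar PropertyChanged notifications to the UI thread for you, which is why the Timer version appears to work; but CollectionChanged events from an ObservableCollection get no such treatment, so the DispatcherTimer route tends to age better as the ViewModel grows.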


  • What add-in/workbench framework is the best .NET alternative to Eclipse RCP?

    - by Winston Fassett
    I'm looking for a plugin-based application framework that is comparable to the Eclipse Plugin Framework, which to my simple mind consists of:

    - a core plugin management framework (Equinox / OSGi), which provides the ability to declare extension endpoints and then discover and load plugins that service those endpoints. (This is different from Dependency Injection, but admittedly the difference is subtle - configuration is highly de-centralized, there are versioning concerns, it might involve an online plugin repository, and most importantly to me, it should be easy for the user to add plugins without needing to know anything about the underlying architecture / config files.)
    - many layers of plugins that provide a basic workbench shell with concurrency support, commands, preference sheets, menus, toolbars, key bindings, etc.

    That is just scratching the surface of the RCP, which itself is meant to serve as the foundation of your application, which you build by writing / assembling even more plugins. Here's what I've gleaned from the internet in the past couple of days... As far as I can tell, there is nothing in the .NET world that remotely approaches the robustness and maturity of the Eclipse RCP for Java, but there are several contenders that do either #1 or #2 pretty well. (I should also mention that I have not made a final decision on WinForms vs WPF, so I'm also trying to understand the level of UI coupling in any candidate framework. I'm also wondering about platform coupling and source code licensing.) I must say that the open-source stuff is generally less documented but easier to understand, while the MS stuff typically has more documentation but is less accessible, so that with many of the MS technologies, I'm left wondering what they actually do, in a practical sense. These are the libraries I have found:

    SharpDevelop
    The first thing I looked at was SharpDevelop, which does both #1 and also #2 in a basic way (no insult to SharpDevelop, which is admirable - I just mean more basic than Eclipse RCP). However, SharpDevelop is an application more than a framework, and there are basic assumptions and limitations there (i.e. being somewhat coupled to WinForms). Still, there are some articles on CodeProject explaining how to use it as the foundation for an application.

    System.AddIn
    It appears that System.AddIn is meant to provide a robust add-in loading framework, with some sophisticated options for loading assemblies with varying levels of trust and even running them out of process. It appears to be primarily code-based, and pretty code-heavy, with lots of assemblies that serve to insulate against versioning issues, using Guidance Automation to generate a good deal of code. So far I haven't found many System.AddIn articles that illustrate how it could be used to build something like an Eclipse RCP, and many people seem to be wringing their hands about its complexity.

    Mono.Addins
    It appears that Mono.Addins was influenced by System.AddIn, SharpDevelop, and MonoDevelop. It seems to provide the basics from System.AddIn, with less sophisticated options for plugin loading, but more simplicity, with attribute-based registration, XML manifests, and the infrastructure for online plugin repositories. It has a pretty good FAQ and documentation, as well as a fairly robust set of examples that really help paint a picture of how to develop an architecture like that of SharpDevelop or Eclipse. The examples use GTK for UI, but the framework itself is not coupled to GTK. So it appears to do #1 (add-in loading) pretty well and points the way to #2 (workbench framework). It appears that Mono.Addins was derived from MonoDevelop, but I haven't actually looked at whether MonoDevelop provides a good core workbench framework.

    Managed Extensibility Framework
    This is what everyone's talking about at the moment, and it's slowly getting clearer what it does, but I'm still pretty fuzzy, even after reading several posts on SO. The official word is that it "can live side-by-side" with System.AddIn. However, it doesn't reference it, and it appears to reproduce some of its functionality. It seems to me, then, that it is a simpler, more accessible alternative to System.AddIn. It appears to be more like Mono.Addins in that it provides attribute-based wiring. It provides "catalogs" that can be attribute-based or directory-based. It does not seem to provide any XML or manifest-based wiring. So far I haven't found much documentation, and the examples seem kind of "magical" and more reminiscent of attribute-based DI, despite the clarifications that MEF is not a DI container. Its license just got opened up, but it does reference WindowsBase - not sure if that means it's coupled to Windows.

    Acropolis
    I'm not sure what this is. Is it MEF, or something that is still coming?

    Composite Application Blocks
    There are WPF and WinForms Composite Application Blocks that seem to provide much more of a workbench framework. I have very little experience with these, but they appear to rely on Guidance Automation quite a bit and are obviously coupled to the UI layers. There are a few examples of combining MEF with these application blocks.

    I've done the best I could to answer my own question here, but I'm really only scratching the surface, and I don't have experience with any of these frameworks. Hopefully some of you can add more detail about the frameworks you have experience with. It would be great if we could end up with some sort of comparison matrix.
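
    To make the MEF discussion above a little less "magical", here is roughly what its attribute-based wiring looks like - a sketch against the System.ComponentModel.Composition bits as shipped in .NET 4 (IPlugin and the "plugins" folder are invented for illustration):

        public interface IPlugin { void Initialize(); }

        // A plugin assembly just declares what it offers...
        [Export(typeof(IPlugin))]
        public class HelloPlugin : IPlugin
        {
            public void Initialize() { Console.WriteLine("hello from a plugin"); }
        }

        // ...and the host declares what it needs, then composes itself
        // from whatever assemblies happen to be sitting in a directory.
        public class Host
        {
            [ImportMany]
            public IEnumerable<IPlugin> Plugins { get; set; }

            public void Compose()
            {
                var catalog = new DirectoryCatalog("plugins");           // scan a folder of DLLs
                var container = new CompositionContainer(catalog);
                container.ComposeParts(this);                            // fills Plugins from every export found
            }
        }

    That DirectoryCatalog is what gives the "user drops a DLL into a folder" experience asked for in #1; the workbench layers of #2 are still left to you (or to the Composite Application Blocks).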


  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and another, a high-end financial services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology - at least enough value to overrule a rewrite, right?

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks", they would beg to differ, most likely give me this look like I don't know anything, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any of the benefits of having source control integrated, and still uses the infamous Visual SourceSafe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So why are such smarties not running away from Visual SourceSafe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual SourceSafe, it might be time to migrate over to TFS. I was able to convince them, and we have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again painless, except that it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated, much more cool. So all in all, VSS served its purpose for years without hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.


  • Compass - Lucene Full text search. Structure and Best Practice.

    - by Rob
    Hi, I have played about with the tutorial and Compass itself for a bit now. I have just started to ramp up the use of it and have found that the performance slows drastically. I am certain that this is due to my mappings and the relationships that I have between entities and was looking for suggestions about how this should be best done. Also as a side question I wanted to know if a variable is in an @searchableComponent but is not defined as @searchable when the component object is pulled out of Compass results will you be able to access that variable? I have 3 main classes that I want to search on - Provider, Location and Activity. They are all inter-related - a Provider can have many locations and activites and has an address; A Location has 1 provider, many activities and an address; An activity has 1 provider and many locations. I have a join table between activity and Location called ActivityLocation that can later provider additional information about the relationship. My classes are mapped to compass as shown below for provider location activity and address. This works but gives a huge index and searches on it are comparatively slow, any advice would be great. Cheers, Rob @Searchable public class AbstractActivity extends LightEntity implements Serializable { /** * Serialisation ID */ private static final long serialVersionUID = 3445339493203407152L; @SearchableId (name="actID") private Integer activityId =-1; @SearchableComponent() private Provider provider; @SearchableComponent(prefix = "activity") private Category category; private String status; @SearchableProperty (name = "activityName") @SearchableMetaData (name = "activityshortactivityName") private String activityName; @SearchableProperty (name = "shortDescription") @SearchableMetaData (name = "activityshortDescription") private String shortDescription; @SearchableProperty (name = "abRating") @SearchableMetaData (name = "activityabRating") private Integer abRating; private String contactName; private String phoneNumber; private String faxNumber; private String email; private String url; @SearchableProperty (name = "completed") @SearchableMetaData (name = "activitycompleted") private Boolean completed= false; @SearchableProperty (name = "isprivate") @SearchableMetaData (name = "activityisprivate") private Boolean isprivate= false; private Boolean subs= false; private Boolean newsfeed= true; private Set news = new HashSet(0); private Set ActivitySession = new HashSet(0); private Set payments = new HashSet(0); private Set userclub = new HashSet(0); private Set ActivityOpeningTimes = new HashSet(0); private Set Events = new HashSet(0); private User creator; private Set userInterested = new HashSet(0); boolean freeEdit = false; private Integer activityType =0; @SearchableComponent (maxDepth=2) private Set activityLocations = new HashSet(0); private Double userRating = -1.00; Getters and Setters .... 
@Searchable public class AbstractLocation extends LightEntity implements Serializable { /** * Serialisation ID */ private static final long serialVersionUID = 3445339493203407152L; @SearchableId (name="locationID") private Integer locationId; @SearchableComponent (prefix = "location") private Category category; @SearchableComponent (maxDepth=1) private Provider provider; @SearchableProperty (name = "status") @SearchableMetaData (name = "locationstatus") private String status; @SearchableProperty private String locationName; @SearchableProperty (name = "shortDescription") @SearchableMetaData (name = "locationshortDescription") private String shortDescription; @SearchableProperty (name = "abRating") @SearchableMetaData (name = "locationabRating") private Integer abRating; private Integer boolUseProviderDetails; @SearchableProperty (name = "contactName") @SearchableMetaData (name = "locationcontactName") private String contactName; @SearchableComponent private Address address; @SearchableProperty (name = "phoneNumber") @SearchableMetaData (name = "locationphoneNumber") private String phoneNumber; @SearchableProperty (name = "faxNumber") @SearchableMetaData (name = "locationfaxNumber") private String faxNumber; @SearchableProperty (name = "email") @SearchableMetaData (name = "locationemail") private String email; @SearchableProperty (name = "url") @SearchableMetaData (name = "locationurl") private String url; @SearchableProperty (name = "completed") @SearchableMetaData (name = "locationcompleted") private Boolean completed= false; @SearchableProperty (name = "isprivate") @SearchableMetaData (name = "locationisprivate") private Boolean isprivate= false; @SearchableComponent private Set activityLocations = new HashSet(0); private Set LocationOpeningTimes = new HashSet(0); private Set events = new HashSet(0); @SearchableProperty (name = "adult_cost") @SearchableMetaData (name = "locationadult_cost") private String adult_cost =""; @SearchableProperty (name = "child_cost") @SearchableMetaData (name = "locationchild_cost") private String child_cost =""; @SearchableProperty (name = "family_cost") @SearchableMetaData (name = "locationfamily_cost") private String family_cost =""; private Double userRating = -1.00; private Set costs = new HashSet(0); private String cost_caveats =""; Getters and Setters .... @Searchable public class AbstractActivitylocations implements java.io.Serializable { /** * */ private static final long serialVersionUID = 1365110541466870626L; @SearchableId (name="id") private Integer id; @SearchableComponent private Activity activity; @SearchableComponent private Location location; Getters and Setters..... 
@Searchable public class AbstractProvider extends LightEntity implements Serializable { private static final long serialVersionUID = 3060354043429663058L; @SearchableId private Integer providerId = -1; @SearchableComponent (prefix = "provider") private Category category; @SearchableProperty (name = "businessName") @SearchableMetaData (name = "providerbusinessName") private String businessName; @SearchableProperty (name = "contactName") @SearchableMetaData (name = "providercontactName") private String contactName; @SearchableComponent private Address address; @SearchableProperty (name = "phoneNumber") @SearchableMetaData (name = "providerphoneNumber") private String phoneNumber; @SearchableProperty (name = "faxNumber") @SearchableMetaData (name = "providerfaxNumber") private String faxNumber; @SearchableProperty (name = "email") @SearchableMetaData (name = "provideremail") private String email; @SearchableProperty (name = "url") @SearchableMetaData (name = "providerurl") private String url; @SearchableProperty (name = "status") @SearchableMetaData (name = "providerstatus") private String status; @SearchableProperty (name = "notes") @SearchableMetaData (name = "providernotes") private String notes; @SearchableProperty (name = "shortDescription") @SearchableMetaData (name = "providershortDescription") private String shortDescription; private Boolean completed = false; private Boolean isprivate = false; private Double userRating = -1.00; private Integer ABRating = 1; @SearchableComponent private Set locations = new HashSet(0); @SearchableComponent private Set activities = new HashSet(0); private Set ProviderOpeningTimes = new HashSet(0); private User creator; boolean freeEdit = false; Getters and Setters... Thanks for reading!! Rob
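
    One hedged suggestion on the index-size problem: everywhere the mappings above cascade with @SearchableComponent, the referenced entity's mapped fields get copied into the referencing document, so Provider's data is re-indexed into every Activity, every Location, and every ActivityLocation - which multiplies the index size and slows writes. Where you only need to resolve the related object, rather than search its fields from the parent, Compass's @SearchableReference stores just the id instead. On AbstractActivity, for example, this might become (not a verified drop-in; whether each relation can be a reference depends on whether you need to query, say, provider fields from an activity search):

        @Searchable
        public class AbstractActivity extends LightEntity implements Serializable {

            @SearchableId(name = "actID")
            private Integer activityId = -1;

            // Reference instead of component: the activity document keeps only
            // the provider's id, and Provider stays searchable in its own index.
            @SearchableReference
            private Provider provider;

            @SearchableProperty(name = "activityName")
            private String activityName;

            // (remaining properties unchanged from the mapping above)
        }

    On the side question: as far as I can tell, only data that is actually mapped (@SearchableProperty / @SearchableComponent / meta-data) is stored in the index, so an unmapped variable on an @SearchableComponent object will come back unset when the object is rebuilt from Compass results; you would need to reload the entity from the database to get it.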


  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
Full C# code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml.Serialization;
using System.Data;
using System.Data.SqlClient;

namespace TestIQueryable
{
    class Program
    {
        private static string connectionString = "";

        static void Main(string[] args)
        {
            BatchInsert();
            Console.WriteLine("done");
        }

        /// <summary>
        /// This is using LINQ to SQL to insert lots of records.
        /// This way is slow as it uses no mass insert.
        /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records.
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertAll()
        {
            using (TestDataContext db = new TestDataContext())
            {
                db.CommandTimeout = 600;
                for (int count = 0; count < 50000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
                }
                db.SubmitChanges();
            }
        }

        /// <summary>
        /// This is using LINQ to SQL to make the table objects.
        /// It is then serialized to an XML document and sent to a stored procedure
        /// that then does a bulk insert (I think with OPENXML).
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertXMLBatch()
        {
            using (TestDataContext db = new TestDataContext())
            {
                TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
                for (int count = 0; count < 500000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    testRecords[count] = testRecord;
                }

                StringBuilder sBuilder = new StringBuilder();
                System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
                XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
                serializer.Serialize(sWriter, testRecords);
                db.insertTestData(sBuilder.ToString());
            }
        }

        /// <summary>
        /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchBulkCopy()
        {
            // Get the DataTable
            DataTable dtInsertRows = GetDataTable();

            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";

                // Number of records to be processed in one go
                sbc.BatchSize = 500000;

                // Map the source column from the DataTable to the destination column in the SQL Server 2005 table
                // sbc.ColumnMappings.Add("ID", "ID");
                sbc.ColumnMappings.Add("NAME", "NAME");

                // Number of records after which the client has to be notified about its status
                sbc.NotifyAfter = dtInsertRows.Rows.Count;

                // Event that gets fired when NotifyAfter number of records are processed.
                sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

                // Finally write to server
                sbc.WriteToServer(dtInsertRows);
                sbc.Close();
            }
        }

        /// <summary>
        /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
        /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchInsert()
        {
            // Get the DataTable with rows in state RowState.Added
            DataTable dtInsertRows = GetDataTable();

            SqlConnection connection = new SqlConnection(connectionString);
            SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.UpdatedRowSource = UpdateRowSource.None;

            // Set the parameter with the appropriate source column name
            command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

            SqlDataAdapter adpt = new SqlDataAdapter();
            adpt.InsertCommand = command;

            // Specify the number of records to be inserted/updated in one go. Default is 1.
            adpt.UpdateBatchSize = 500000;

            connection.Open();
            int recordsInserted = adpt.Update(dtInsertRows);
            connection.Close();
        }

        private static DataTable GetDataTable()
        {
            // You first need a DataTable with all the insert values in it
            DataTable dtInsertRows = new DataTable();
            dtInsertRows.Columns.Add("NAME");

            for (int i = 0; i < 500000; i++)
            {
                DataRow drInsertRow = dtInsertRows.NewRow();
                string name = "Name : " + i;
                drInsertRow["NAME"] = name;
                dtInsertRows.Rows.Add(drInsertRow);
            }
            return dtInsertRows;
        }

        static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
        {
            Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
        }
    }
}
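Given the note above about wanting validation plus objects, one possible middle ground (a sketch, not from the original code) is LINQ to SQL in chunks, with a fresh DataContext per chunk. It is still row-by-row on the wire, so it will not approach SqlBulkCopy speed, but it keeps change-tracking overhead bounded and leaves an obvious place for per-record validation:

// Sketch only: chunked LINQ to SQL inserts. Still row-by-row on the wire,
// so it will not match SqlBulkCopy, but a fresh DataContext per chunk keeps
// the change tracker small. The chunk size of 1,000 is an arbitrary assumption.
private static void LinqInsertChunked(int total)
{
    const int chunkSize = 1000;

    for (int start = 0; start < total; start += chunkSize)
    {
        using (TestDataContext db = new TestDataContext())
        {
            int end = Math.Min(start + chunkSize, total);
            for (int count = start; count < end; count++)
            {
                TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                testRecord.NAME = "Name : " + count;

                // Per-record validation could go here, before the insert is queued.
                db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
            }
            db.SubmitChanges();
        }
    }
}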

    Read the article

  • Open Source Survey: Oracle Products on Top

    - by trond-arne.undheim
Oracle continues to work with the open source community to bring the most innovative and productive software to market (more). Oracle products received the most votes in several key categories of the 2010 Linux Journal Reader's Choice Awards. With over 12,000 technologists reporting, these products earned top spots:
- Best Office Suite: OpenOffice.org
- Best Single Office Program: OpenOffice.org Writer
- Best Database: MySQL
- Best Virtualization Solution: VirtualBox
"As the leading open source technology and service provider, Oracle continues to work with the community stakeholders to rapidly innovate many open source products for use in fully tested production environments," says Edward Screven, Oracle's chief corporate architect. "Supporting open source is important to Oracle and our customers, and we continue to invest in it." According to a recent report by the Linux Foundation, Oracle is one of the top ten contributors to the Linux kernel. Oracle also contributes millions of lines of code to these important projects:
- OpenJDK: 7,002,579
- Eclipse: 1,800,000 (#3 in active committers)
- MySQL: 5,073,113
- NetBeans: 7,870,446
- JSF: 701,980
- Apache MyFaces Trinidad: 1,316,840
- Hudson: 1,209,779
- OpenOffice.org: 7,500,000

    Read the article

  • Code Camp 2011 – Summary

    - by hajan
Waiting twelve whole months for this year's Code Camp 2011 was something all Microsoft-technology (and even non-Microsoft) developers had been doing over the past year. Last year's success was big enough to be heard and to influence everything around our developer community and beyond. Code Camp 2011 was nothing less than an undeniable success that will remain in our memory for a long time to come. Darko Milevski (president of MKDOT.NET UG and SharePoint MVP) said something interesting in the event keynote: up to now we have looked at the past by saying what we did; now we will focus on the future and on how to grow our community in the coming days, weeks, months and, I hope, for many years. Even though it was held only two days ago (26th of November 2011), I already feel nostalgia for everything that happened there and for the excellent time we all spent together.

ORGANIZED BY ENTHUSIASTS AND EXPERTS
Code Camp 2011 was organized by a number of community enthusiasts and experts who unselfishly contributed all their free time to make the best of this event. The event was organized by a well-known community group called MKDOT.NET User Group, a name known not only in Macedonia but also in many countries abroad. The organization mainly consists of software developers, technical leads and team leads in several well-known companies in Macedonia, as well as Microsoft MVPs.

SPEAKERS
There were 24 speakers across five parallel tracks. At Code Camp 2011 we had two groups of speakers: professional experts in various technologies and student speakers. The interesting new thing here was the student speakers, who drew a lot of attention, especially from other students curious to see what their colleagues would speak about and how they use Microsoft technologies in different coding scenarios, practices and topics. Among the professional speakers there were 7 Microsoft MVPs: two ASP.NET/IIS MVPs, two C# MVPs, and one MVP each in SharePoint, SQL Server and Exchange Server. I must say that besides the MVP speakers, who definitely did a great job as always, there were other excellent speakers as well, covering technologies such as web development, Windows Phone development, XNA, Windows 8, games development, Entity Framework, event-driven programming, SOLID, SQLCLR, T-SQL, etc.

SESSIONS
There were 25 sessions, mainly related to Microsoft technologies but ranging from Windows 8, WP7 and ASP.NET to games development, XNA and event-driven programming. Sessions ran in five parallel tracks named Red, Yellow, Green, Blue and Student, with five presentations in each track, each at level 300 or 400. More info.

MY SESSION (ASP.NET MVC Best Practices)
I must say that of the many speaking engagements I have had, this was one of my best performances, and I definitely set a new personal record for attendance at my sessions, probably overall. I spoke on ASP.NET MVC Best Practices, showing tips, tricks, guidelines and best practices on what to use and what to avoid when developing with one of the best web development frameworks available today, ASP.NET MVC. I had approximately 350+ attendees; the hall was so full that there was no standing room left.
Besides .NET developers, there were many developers from other technology backgrounds, who also received the presentation very well, and I really hope I gave them a reason to think of ASP.NET as one of the best options for web development today (if you ask me, it's the best one ;-)). I included 10 tips on using ASP.NET MVC, each followed by a demo. Besides these 10 tips, I briefly introduced the concept of ASP.NET MVC for those who hadn't worked with the framework, and at the end I gave some bonus tips. I must say there was a lot of laughter at some of the funny lines I delivered, like "If you code ASP.NET MVC, girls will love you more" – the same goes for girls, only replace girls with boys :). [LINK TO SESSION WILL GO HERE, ONCE SESSIONS ARE AVAILABLE ON MK CODECAMP WEBSITE]

VOLUNTEERS
Without strong organization, such events could not gather hundreds of attendees in one place and still stay perfectly organized down to the smallest details; that takes a dedicated organization and volunteers. I would like to dedicate this space in my blog to them and to say one big THANK YOU for supporting us before the event and throughout the whole day of the event. With such young and dedicated volunteers, we could achieve nothing but great results. THANK YOU EVERYONE FOR YOUR CONTRIBUTION!

NETWORKING
One of the main reasons we do such events is to gather all the professionals in one place. Networking is what everyone wants, because through it we can meet incredible people in one place. It is an amazing feeling to share your knowledge with others, exchange thoughts on various topics and talk to interesting people. I had very special moments with many attendees, especially after my presentation. A special thank you to everyone who came to meet me in person, whether to ask a question, congratulate me on my session, or simply say hello and smile :) – everything counts! Thank you!

TWITTER
During the event, Twitter was the most useful event-wide communication tool, where everyone could tweet with the hash tag #mkcodecamp or #mkdotnet and say whatever they wanted about what was happening at that moment. In my next blog post I will list the craziest tweets posted during this event.

FUTURE OF MKDOT.NET
Having such a strong community around MKDOT.NET, the future seems very bright. The initial plan is to have sub-groups in several technologies; all these sub-groups will belong to the MKDOT.NET UG, which will be, in a way, the head of the sub-groups. We are doing this to provide a better division by technology and to organize ourselves better, since our community is very big – around 500 members in MKDOT.NET. We will have five sub-groups:
- Web User Group (Lead: Hajan Selmani – me)
- Mobile User Group (Lead: Filip Kerazovski)
- Visual C# User Group (Lead: Vekoslav Stefanovski)
- SharePoint User Group (Lead: Darko Milevski)
- Dynamics User Group (Lead: Vladimir Senih)

SUMMARY
Online registered attendees: ~1,200
Event attendees: ~800
Number of members in the organization: 40+
Organized by: MKDOT.NET User Group
Number of tracks: 5
Number of speakers: 24
Number of sessions: 25
Event official website: http://codecamp.mkdot.net
Total number of sponsors: 20
Platinum sponsors: Microsoft, INETA, Telerik
Venue: FON University
City and country: Skopje, Macedonia

THANK YOU FOR BEING PART OF THE BEST EVENT IN MACEDONIA, CODE CAMP 2011. Regards, Hajan

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
I was looking through my group's C# coding standards the other day, and there were a couple of legacy items in there that caught my eye. They had been passed down from committee to committee so many times that for a long time no one even thought to second-guess or test them. It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be, for the sake of squeezing an extra ounce of performance out of our software. The two standards in question were these, in paraphrase:

- Prefer StringBuilder or string.Format() to string concatenation.
- Prefer string.Equals() with a case-insensitive option to string.ToUpper().Equals().

Now, some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself. So let's dig in.

The first test was a pretty standard one. When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()? Before we begin, I read in a number of iterations from the console and a length for each string to generate. Then I generate that many random strings of the given length, plus an array to hold the results. Why am I so keen to keep the results? Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to hold on to them. I also didn't want the random strings to be part of the measured allocation, so I pre-allocate them and the array up front, before the snapshot. So in the code snippets below:

- num – number of iterations.
- strings – array of randomly generated strings.
- results – array to hold the results of the concatenation tests.
- timer – a System.Diagnostics.Stopwatch() instance to time code execution.
- start – beginning memory size.
- stop – ending memory size.
- after – memory size after the final GC.

First, let's look at the concatenation loop:

// build num strings using concatenation.
for (int i = 0; i < num; i++)
{
    results[i] = "This is test #" + i + " with a result of " + strings[i];
}

Pretty standard, right? Next, string.Format():

// build strings using string.Format()
for (int i = 0; i < num; i++)
{
    results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]);
}

Finally, StringBuilder:

// build strings using StringBuilder
for (int i = 0; i < num; i++)
{
    var builder = new StringBuilder();
    builder.Append("This is test #");
    builder.Append(i);
    builder.Append(" with a result of ");
    builder.Append(strings[i]);
    results[i] = builder.ToString();
}

I take each of these loops and time them using a block like this:

// get the total amount of memory used; true tells it to run GC first.
start = System.GC.GetTotalMemory(true);

// restart the timer
timer.Reset();
timer.Start();

// *** code to time and measure goes here. ***

// get the current amount of memory, stop the timer, then get memory after GC.
stop = System.GC.GetTotalMemory(false);
timer.Stop();
after = System.GC.GetTotalMemory(true);

So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations:

Operator +      - Time: 547, Memory: 56104540/55595960 - 500000
string.Format() - Time: 749, Memory: 57295812/55595960 - 500000
StringBuilder   - Time: 608, Memory: 55312888/55595960 - 500000

Egad! string.Format() brings up the rear and + triumphs, well, at least in terms of speed. The concat burns more memory than StringBuilder but less than string.Format(). This shows two main things:

- StringBuilder is not always the panacea many think it is.
- The difference between any of the three is minuscule!

That second point is extremely important! You will often hear people grasp at results like these and say, "look, operator + is 10% faster than StringBuilder, so always use +." Statements like that are a disservice and often misleading. For example, if I had a good guess at what the size of the string would be, I could have pre-allocated my StringBuilder like so:

for (int i = 0; i < num; i++)
{
    // pre-declare StringBuilder to have a 100-char buffer.
    var builder = new StringBuilder(100);
    builder.Append("This is test #");
    builder.Append(i);
    builder.Append(" with a result of ");
    builder.Append(strings[i]);
    results[i] = builder.ToString();
}

Now let's look at the times:

Operator +      - Time: 551, Memory: 56104412/55595960 - 500000
string.Format() - Time: 753, Memory: 57296484/55595960 - 500000
StringBuilder   - Time: 525, Memory: 59779156/55595960 - 500000

Whoa! All of a sudden StringBuilder is back on top again! But notice, it takes more memory now. This makes perfect sense if you examine what the compiler emits: a string concat (+) compiles down to a call to string.Concat(), which computes the lengths of the arguments and allocates a result string of exactly the right size for you.

But even IF we know the approximate size of our StringBuilder, look how much less readable it is! That's why I feel you should always take into account both readability and performance. After all, consider that all these timings are over 500,000 iterations. That's at best a 0.0004 ms difference per call, which is negligible. The key is to pick the best tool for the job. What do I mean? Consider these awesome words of wisdom:

- Concatenate (+) is best at concatenating.
- StringBuilder is best when you need to build.
- Format is best at formatting.

Totally Earth-shattering, right! But if you consider it carefully, there is actually a lot of beauty in its simplicity. Remember, there is no magic bullet. If one of these always beat the others, we'd have only one choice and not three. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length; use it in those times when you are looping until you hit a stop condition while building up a result, and it won't steer you wrong. string.Format seems to be the loser from the stats, but consider which of these is more readable. (Yes, ignore the fact that you could do this with ToString() on a DateTime.)

// build a date via concatenation
var date1 = (month < 10 ? "0" : string.Empty) + month + '/'
    + (day < 10 ? "0" : string.Empty) + day + '/' + year;

// build a date via StringBuilder
var builder = new StringBuilder(10);
if (month < 10) builder.Append('0');
builder.Append(month);
builder.Append('/');
if (day < 10) builder.Append('0');
builder.Append(day);
builder.Append('/');
builder.Append(year);
var date2 = builder.ToString();

// build a date via string.Format
var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year);

So the strength of string.Format is that it makes constructing a formatted string easy to read. Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does.

So my lesson is: don't look for the silver bullet! Choose the best tool. Micro-optimization almost always bites you in the end, because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization. They've been stated before in many forms, but here's how I always remember them:

- For beginners: do not optimize.
- For experts: do not optimize yet.

It's so true. Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing, because it won't be your bottleneck. Code for readability and choose the best tool for the job, which will usually be the most readable and maintainable choice as well. Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options... then you can start to think about optimizing.

    Read the article

  • Solaris 11 Resources for System Administrators

    - by rickramsey
Have too much to worry about? Let us lighten the load. OTN's job is to filter through all the available resources and take you straight to the content that will help you do your job. For starters ...

Oracle Solaris 11 Documentation Library
Rock-solid instructions and background from the best tech writers in the business. Includes:
- Getting Started (including What's New and Release Notes)
- Installing and Updating (includes info about IPS)
- Administration Guide
- Security Guide
- Working With the Desktop
- Developing Applications for Solaris 11
- Reference Manuals
- Important Information from Previous Releases
- Related Information
- Legal Notes

Oracle Solaris 11 Training
Oracle University offers training and certification for sysadmins at all levels. If you're familiar with Oracle Solaris 10, these courses are the best way to become familiar with Solaris 11:
- What's New in Oracle Solaris 11 (self-study)
- Transition to Solaris 11 - classroom and virtual
- Solaris 11 Administration - classroom and virtual
- Solaris 11 Advanced Administration - classroom and virtual

These are the education paths for Oracle certifications on Solaris 11:
- Oracle Certified Associate
- Oracle Certified Professional
- Courses for Solaris System, Network, and Security Administration - scroll to the bottom of the page for Solaris courses

Indexes and Feeds of Our Best How-To Articles
We update these indexes and feeds only after we read through the available content and select the best. These are our personal recommendations by topic, product, or audience. We'll be adding content about Oracle Solaris 11 in the coming days and weeks. Keep an eye out.
- All Systems Indexes
- Solaris 11 Collection
- All System Feeds

OTN Systems Community Home Page
Our home page is the same as the front page of a newspaper, but without the advertising: the latest articles, the latest useful content from the community, plus links to all the other resources available on OTN.

... And If You Want to Be The First To Know
After we select the best content, the first thing we do is hang out at the OTN Garage and talk about it. Every once in a while we talk about cool cars and motorcycles, too:
- On Facebook
- On Twitter
- On Our Blog - Rick Ramsey

Website | Newsletter | Facebook | Twitter

    Read the article

  • Oracle OpenWorld Series: Fusion Middleware for Enterprise Applications

    - by Michelle Kimihira
Continuing from Tuesday’s Oracle OpenWorld Series: Best of Oracle Fusion Middleware blog, let’s focus on which Middleware sessions to attend if you are an Enterprise Applications customer. This Focus On document gives you a roadmap of must-attend sessions and demos. We also have a great line-up of customers participating in Fusion Middleware sessions targeted at Enterprise Applications customers. Here are a few:

- Monday, 10/1, 1:45 PM – 2:45 PM: Intuit and Qualcomm join CON9470 – Best Practices for Realizing Greater Returns from Oracle Fusion Middleware Projects, Moscone West, 3003
- Wednesday, 10/3, 10:15 AM – 11:15 AM: Cognizant joins CON9506 – Oracle Exalogic: The Best Choice for Running Your Applications, Moscone West, 3003
- Wednesday, 10/3, 5:00 PM – 6:00 PM: Ingersoll Rand joins CON9047 – Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle Exalogic, Moscone West, 2016
- Thursday, 10/4, 12:45 PM – 1:45 PM: Allegis Group joins CON9048 – Harness the Power of Oracle Exadata and Oracle Exalogic with PeopleSoft Applications, Moscone West, 3011

Additional Information
- Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Oracle OpenWorld Blog
- Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications
- Product Information on Oracle.com: Oracle Fusion Middleware
- Subscribe to our regular Fusion Middleware Newsletter
- Follow us on Twitter and Facebook

    Read the article

  • SQLAuthority News – My Evaluation of Singapore SharePoint Conference

    - by pinaldave
Earlier this year I presented at SQLAuthority News – Presenting at South East Asia SharePoint Conference – Oct 26, 27, 2010 – Singapore. It felt very good to present at the Singapore SharePoint Conference, as I was the only SQL speaker at the event. The event was filled with SharePoint enthusiasts and lots of other experts from all around the globe, and it was one of the best organized events I have attended in the subcontinent. I just received my feedback scores from the event, and I was very surprised and stunned. My ratings are very high, and my demos were rated among the best of the whole event. I am not sure how much of the feedback I am allowed to share with the community, as the organizers did not specify, but I am quite certain I can share my own numbers:

Speaker – 4.39 (best score 4.74, average 3.84)
Contents – 4.37 (best score 4.39, average 3.65)
Demo – 4.48 (best score 4.48, average 3.61)

I am very glad that all of my efforts in presenting at the SharePoint Conference finally paid off. I had been quite worried about whether attendees would accept me, as I was coming as a speaker from an outside technology and no one knew me there. I must thank all of you for the same.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SharePoint, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SharePoint

    Read the article

  • Oracle Fusion HCM Gains Traction and Customer Recognition

    - by Scott Ewart
Oracle Fusion HCM gained traction and customer recognition at the HRO Summit Europe in Barcelona, where the audience voted Oracle Fusion Human Capital Management best in Most Reliable, Most Innovative and Best in Class. During the annual European HRO Summit in Barcelona, HRO buyers, service providers, third-party advisors and other attendees were visibly impressed with the Fusion HCM product stack. Following the "present-off" among four technology vendors, Oracle was voted first in the following categories:

- Which technology could best suit the needs of your company
- Which technology came across as the most reliable
- Which technology offers the most innovation
- Based on what you heard today, which technology presentation would you rate as best in class

Oracle was voted second in the two remaining categories. Click here for the full article ==> http://bit.ly/sxC3tX

    Read the article

  • Frequency to submit sitemap to search engines

    - by user577691
I have gone live with my site and, being new to search engines and SEO, I am not sure of the best way to handle sitemap.xml. I have created sitemap.xml and submitted it to:

- Google, using Webmaster Tools
- Yahoo/Bing, using Bing Webmaster
- Ask.com

Now, since the site will be updated 2-3 times per week, I am not sure of the best approach:

- Do I need to submit sitemap.xml again?
- If I need to submit sitemap.xml again and again, how frequently should I submit it?

Please suggest the best approach.
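One mechanism relevant here is the sitemaps.org ping convention: rather than manually resubmitting, a site can notify a search engine over HTTP whenever the sitemap changes. Below is a hedged sketch; the ping URLs follow the sitemaps.org protocol as documented around this time and are assumptions to verify, not guaranteed current endpoints, and example.com is a placeholder.

// Sketch only: notify search engines that sitemap.xml has changed,
// per the sitemaps.org ping convention. The endpoint URLs are assumptions
// based on documentation of that era; verify them before relying on this.
using System;
using System.Net;

class SitemapPinger
{
    static void Main()
    {
        string sitemapUrl = Uri.EscapeDataString("http://www.example.com/sitemap.xml");

        string[] pingEndpoints =
        {
            "http://www.google.com/webmasters/tools/ping?sitemap=" + sitemapUrl,
            "http://www.bing.com/ping?sitemap=" + sitemapUrl
        };

        using (var client = new WebClient())
        {
            foreach (string endpoint in pingEndpoints)
            {
                // A 200 response means the ping was received, not that the
                // sitemap was crawled or validated.
                client.DownloadString(endpoint);
                Console.WriteLine("Pinged: " + endpoint);
            }
        }
    }
}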

    Read the article

  • Google analytics campaign advice

    - by Drewsdesign
I am buying traffic from a broker (not from a single source) and sending it to various landing pages. I would like to know the best way to structure a campaign so I can find out which referring site/URL is performing best (time on site, bounce rate, etc.). Should utm_campaign be the broker name and utm_source be the landing-page name, or should it be the other way around? Also, what would be the best way to create a custom report showing all the referrer metrics for each landing page? Thanks guys, I really appreciate any help on this.

    Read the article

< Previous Page | 262 263 264 265 266 267 268 269 270 271 272 273  | Next Page >