Search Results

Search found 3680 results on 148 pages for 'dazed n confused'.

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be. But my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript. If I am just doing something stupid, let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following:

    HTML: <canvas id="heir"></canvas>

    CSS: @media screen { body { font-size: 12pt } /* Game Rendering Space */ canvas { width: 640px; height: 480px; border-style: solid; border-width: 1px; } }

    JavaScript (shortened):

        function testDraw ( thecontext ) {
            var myblue = 255;
            thecontext.save(); // Save All Settings (Before this Function was called)
            for (var i = 0; i < 480; i = i + 10 ) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, i, 640, 10);
                myblue = myblue - 2;
            }
            thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...)
        }

        function main () {
            var targetcontext = document.getElementById("main").getContext("2d");
            testDraw(targetcontext);
        }

    To me this should produce a series of 640w by 10h pixel bars. In Google Chrome and Firefox I get 15 bars. To me that means (480 / 15) is 32-pixel-high bars. So I change the code to:

        function testDraw ( thecontext ) {
            var myblue = 255;
            thecontext.save(); // Save All Settings (Before this Function was called)
            for (var i = 0; i < 16; i++ ) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, (i * 10), 640, 10);
                myblue = myblue - 10;
            }
            thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...)
        }

    and get a true 32-pixel-height result for comparison. Other than the fact that the first code snippet has shades of blue rendering in non-visible portions of the canvas, the bars measure 32 pixels. Now back to the original JavaScript code... If I inspect the canvas tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT! Firefox exports the original code to PNG at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...
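
    The symptom described is consistent with the canvas defaulting to its intrinsic bitmap size of 300 x 150 and then being stretched by the CSS rule; the width/height attributes (or properties) set the real pixel grid, while CSS only scales it. A minimal sketch of that fix, reusing the id from the snippet above:

        <!-- size the bitmap itself; CSS alone only scales the default 300x150 -->
        <canvas id="heir" width="640" height="480"></canvas>

        // or equivalently, in script before drawing:
        var canvas = document.getElementById("heir");
        canvas.width = 640;   // bitmap width in pixels
        canvas.height = 480;  // bitmap height in pixels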

  • Is there a potential for resource leak/double free here?

    - by nhed
    The following sample (not compiled, so I won't vouch for the syntax) pulls two resources from resource pools (not allocated with new), then "binds" them together with MyClass for the duration of a certain transaction. The transaction, implemented here by myFunc, attempts to protect against leakage of these resources by tracking their "ownership". The local resource pointers are cleared when it's obvious that instantiation of MyClass was successful. The local catch, as well as the destructor ~MyClass, returns the resources to their pool (double-frees are protected by the above-mentioned clearing of the local pointers). Instantiation of MyClass can fail and result in an exception at two steps: (1) the actual memory allocation, or (2) the constructor body itself. I do not have a problem with #1, but in the case of #2, the exception may be thrown AFTER m_resA & m_resB were set, causing both ~MyClass and the cleanup code of myFunc to assume responsibility for returning these resources to their pools. Is this a reasonable concern?

    Options I have considered, but didn't like:

    - Smart pointers (like boost's shared_ptr). I didn't see how to apply them to a resource pool (aside from wrapping in yet another instance).
    - Allowing the double-free to occur at this level but protecting against it at the resource pools.
    - Trying to use the exception type - trying to deduce that if bad_alloc was caught, MyClass did not take ownership. This would require a try-catch in the constructor to make sure that any allocation failures in ABC() ...more code here... won't be confused with failures to allocate MyClass.

    Is there a clean, simple solution that I have overlooked?

        class SomeResourceA;
        class SomeResourceB;

        class MyClass {
        private:
            // These resources come out of a resource pool, not allocated with "new" for each use by MyClass
            SomeResourceA* m_resA;
            SomeResourceB* m_resB;
        public:
            MyClass(SomeResourceA* resA, SomeResourceB* resB) : m_resA(resA), m_resB(resB) {
                ABC(); // ... more code here, could throw exceptions
            }
            ~MyClass() {
                if (m_resA) { m_resA->Release(); }
                if (m_resB) { m_resB->Release(); }
            }
        };

        void myFunc(void)
        {
            SomeResourceA* resA = NULL;
            SomeResourceB* resB = NULL;
            MyClass* pMyInst = NULL;
            try {
                resA = g_pPoolA->Allocate();
                resB = g_pPoolB->Allocate();
                pMyInst = new MyClass(resA, resB);
                resA = NULL; // ownership successfully transferred to pMyInst
                resB = NULL; // ownership successfully transferred to pMyInst
                // Do some work with pMyInst; ...;
                delete pMyInst;
            } catch (...) {
                // cleanup: need to check if resA or resB were allocated prior
                // to construction of pMyInst.
                if (resA) resA->Release();
                if (resB) resB->Release();
                delete pMyInst;
                throw; // rethrow caught exception
            }
        }
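
    One hedged observation that may dissolve the concern: if the constructor body of MyClass throws, ~MyClass is never executed for that object (destructors run only for fully constructed objects), so only myFunc's catch block would release the resources; case #2 does not actually produce a double free. To make the ownership hand-off explicit anyway, a small scope guard works. A sketch, with names invented here:

        // PoolGuard returns the resource to its pool unless ownership is released.
        template <typename T>
        class PoolGuard {
        public:
            explicit PoolGuard(T* p) : m_p(p) {}
            ~PoolGuard() { if (m_p) m_p->Release(); }
            T* get() const { return m_p; }
            T* release() { T* p = m_p; m_p = 0; return p; } // transfer ownership; never throws
        private:
            PoolGuard(const PoolGuard&);            // non-copyable (C++03 style)
            PoolGuard& operator=(const PoolGuard&);
            T* m_p;
        };

        void myFunc(void)
        {
            PoolGuard<SomeResourceA> guardA(g_pPoolA->Allocate());
            PoolGuard<SomeResourceB> guardB(g_pPoolB->Allocate());
            MyClass inst(guardA.get(), guardB.get()); // may throw; the guards still own
            guardA.release(); // from here on ~MyClass owns and releases the resources
            guardB.release();
            // ... do some work with inst ...
        } // every path releases each resource exactly once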

  • jquery drag and drop script and problem in reading json array

    - by Mac Taylor
    I made a script, exactly like the WordPress widgets page, where you can drag and drop objects. This is my jQuery script:

        <script type="text/javascript">
        $(function(){
            $('.widget')
                .each(function(){
                    $(this).hover(function(){
                        $(this).find('h4').addClass('collapse');
                    }, function(){
                        $(this).find('h4').removeClass('collapse');
                    })
                    .find('h4').hover(function(){
                        $(this).find('.in-widget-title').css('visibility', 'visible');
                    }, function(){
                        $(this).find('.in-widget-title').css('visibility', 'hidden');
                    })
                    .click(function(){
                        $(this).siblings('.widget-inside').toggle();
                        // Save state on change of collapse state of panel
                        updateWidgetData();
                    })
                    .end()
                    .find('.in-widget-title').css('visibility', 'hidden');
                });

            $('.column').sortable({
                connectWith: '.column',
                handle: 'h4',
                cursor: 'move',
                placeholder: 'placeholder',
                forcePlaceholderSize: true,
                opacity: 0.4,
                start: function(event, ui){
                    // Firefox, Safari/Chrome fire click event after drag is complete, fix for that
                    if($.browser.mozilla || $.browser.safari)
                        $(ui.item).find('.widget-inside').toggle();
                },
                stop: function(event, ui){
                    ui.item.css({'top':'0','left':'0'}); // Opera fix
                    if(!$.browser.mozilla && !$.browser.safari)
                        updateWidgetData();
                }
            })
            .disableSelection();
        });

        function updateWidgetData(){
            var items=[];
            $('.column').each(function(){
                var columnId=$(this).attr('id');
                $('.widget', this).each(function(i){
                    var collapsed=0;
                    if($(this).find('.widget-inside').css('display')=="none")
                        collapsed=1;
                    // Create Item object for current panel
                    var item={ id: $(this).attr('id'), collapsed: collapsed, order: i, column: columnId };
                    // Push item object into items array
                    items.push(item);
                });
            });
            // Assign items array to sortorder JSON variable
            var sortorder={ items: items };
            // Pass sortorder variable to server using ajax to save state
            $.post("blocks.php"+"&order="+$.toJSON(sortorder), function(data){
                $('#console').html(data).fadeIn("slow");
            });
        }
        </script>

    The main part is saving the object order in the table, and this is my PHP part:

        function stripslashes_deep($value) {
            $value = is_array($value) ? array_map('stripslashes_deep', $value) : stripslashes($value);
            return $value;
        }

        $order = $_GET['order'];
        $order = sql_quote($order);
        if(empty($order)){ echo "Invalid data"; exit; }
        global $db,$prefix;
        if (get_magic_quotes_gpc()) {
            $_POST = array_map('stripslashes_deep', $_POST);
            $_GET = array_map('stripslashes_deep', $_GET);
            $_COOKIE = array_map('stripslashes_deep', $_COOKIE);
            $_REQUEST = array_map('stripslashes_deep', $_REQUEST);
        }
        $data=json_decode($order);
        foreach($newdata->items as $item) {
            // Extract column number for panel
            $col_id=preg_replace('/[^\d\s]/', '', $item->column);
            // Extract id of the panel
            $widget_id=preg_replace('/[^\d\s]/', '', $item->id);
            $sql="UPDATE blocks_tbl SET bposition='$col_id', weight='".$item->order."' WHERE id='".$widget_id."'";
            mysql_query($sql) or die('Error updating widget DB');
        }
        print_r($order);

    Now, for example, the output is this:

        items\":[{\"id\":\"item26\",\"collapsed\":1,\"order\":0,\"column\":\"c\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":1,\"column\":\"i\"},{\"id\":\"item1\",\"collapsed\":1,\"order\":2,\"column\":\"i\"},{\"id\":\"item3\",\"collapsed\":1,\"order\":3,\"column\":\"i\"},{\"id\":\"item16\",\"collapsed\":1,\"order\":4,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":5,\"column\":\"i\"},{\"id\":\"item6\",\"collapsed\":1,\"order\":6,\"column\":\"i\"},{\"id\":\"item17\",\"collapsed\":1,\"order\":7,\"column\":\"i\"},{\"id\":\"item19\",\"collapsed\":1,\"order\":8,\"column\":\"i\"},{\"id\":\"item10\",\"collapsed\":1,\"order\":9,\"column\":\"i\"},{\"id\":\"item11\",\"collapsed\":1,\"order\":10,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"l\"},{\"id\":\"item5\",\"collapsed\":1,\"order\":1,\"column\":\"l\"},{\"id\":\"item8\",\"collapsed\":1,\"order\":2,\"column\":\"l\"},{\"id\":\"item13\",\"collapsed\":1,\"order\":3,\"column\":\"l\"},{\"id\":\"item21\",\"collapsed\":1,\"order\":4,\"column\":\"l\"},{\"id\":\"item28\",\"collapsed\":1,\"order\":5,\"column\":\"l\"},{\"id\":\"item7\",\"collapsed\":1,\"order\":0,\"column\":\"r\"},{\"id\":\"item20\",\"collapsed\":1,\"order\":1,\"column\":\"r\"},{\"id\":\"item15\",\"collapsed\":1,\"order\":2,\"column\":\"r\"},{\"id\":\"item18\",\"collapsed\":1,\"order\":3,\"column\":\"r\"},{\"id\":\"item14\",\"collapsed\":1,\"order\":4,\"column\":\"r\"}]}

    The question is: how can I find out column_id or order? I'm a little bit confused.
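
    For the closing question, a hedged sketch: once the JSON is decoded, the column and order are just properties on each item. Two likely culprits in the snippet above are worth noting: json_decode() fills $data but the loop iterates $newdata, and the jQuery $.post URL concatenates "&order=" onto blocks.php without a leading "?".

        $order = $_GET['order'];
        if (get_magic_quotes_gpc()) {
            $order = stripslashes($order);      // undo magic quotes before decoding
        }
        $data = json_decode($order);
        foreach ($data->items as $item) {
            $columnId = $item->column;          // e.g. "c", "i", "l", "r"
            $position = $item->order;           // 0-based position within that column
            $widgetId = preg_replace('/[^\d]/', '', $item->id); // "item26" -> "26"
            // ... UPDATE blocks_tbl SET bposition = ..., weight = ... WHERE id = ...
        }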

  • Codeigniter: Controller URI with Library

    - by Kevin Brown
    I have a working controller and library function, but I now need to pass a URI segment to the library for decision making, and I'm stuck. Controller: function survey($method) { $id = $this->session->userdata('id'); $data['member'] = $this->home_model->getUser($id); //Convert the db Object to a row array $data['manager'] = $data['member']->row(); $manager_id = $data['manager']->manager_id; $data['manager'] = $this->home_model->getUser($manager_id); $data['manager'] = $data['manager']->row(); if ($data['manager']->credits == '0') { flashMsg('warning',"You can't complete the assessment until your manager has purchased credit."); redirect('home','location'); } elseif ($data['manager']->test_complete == '3'){ flashMsg('warning',"You already completed the Assessment."); redirect('home','location'); } else{ $data['header'] = "Home"; $this->survey_form_processing->survey_form($this->_container,$data); } } Library: function survey_form($container) { if($method ==1){ $id = $this->CI->session->userdata('id'); // Setup fields for($i=1;$i<18;$i++){ $fields["a_".$i] = 'Question '.$i; } for($i=1;$i<17;$i++){ $fields["b_".$i] = 'Question '.$i; } $fields["company_name"] = "Company Name"; $fields['company_address'] = "company_address"; $fields['company_phone'] = "company_phone"; $fields['company_state'] = "company_state"; $fields['company_city'] = "company_city"; $fields['company_zip'] = "company_zip"; $fields['job_title'] = "job_title"; $fields['job_type'] = "job_type"; $fields['job_time'] = "job_time"; $fields['department'] = "department"; $fields['supervisor'] = "supervisor"; $fields['vision'] = "vision"; $fields['height'] = "height"; $fields['weight'] = "weight"; $fields['hand_dominance'] = "hand_dominance"; $fields['areas_of_fatigue'] = "areas_of_fatigue"; $fields['injury_review'] = "injury_review"; $fields['job_positive'] = "job_positive"; $fields['risk_factors'] = "risk_factors"; $fields['job_improvement_short'] = "job_improvement_short"; $fields['job_improvement_long'] = "job_improvement_long"; $fields["c_1"] = "Near Lift"; $fields["c_2"] = "Middle Lift"; $fields["c_3"] = "Far Lift"; $this->CI->validation->set_fields($fields); // Set Rules for($i=1;$i<18;$i++){ $rules["a_".$i]= 'hour|integer|max_length[2]'; } for($i=1;$i<17;$i++){ $rules["b_".$i]= 'hour|integer|max_length[2]'; } // Setup form default values $this->CI->validation->set_rules($rules); if ( $this->CI->validation->run() === FALSE ) { // Output any errors $this->CI->validation->output_errors(); } else { // Submit form $this->_submit(); } // Modify form, first load $this->CI->db->from('be_user_profiles'); $this->CI->db->where('user_id' , $id); $user = $this->CI->db->get(); $this->CI->db->from('be_survey'); $this->CI->db->where('user_id' , $id); $survey = $this->CI->db->get(); $user = array_merge($user->row_array(),$survey->row_array()); $this->CI->validation->set_default_value($user); // Display page $data['user'] = $user; $data['header'] = 'Risk Assessment Survey'; $data['page'] = $this->CI->config->item('backendpro_template_public') . 'form_survey'; $this->CI->load->view($container,$data); } else{ redirect('home','location'); } } My library function doesn't know what to do with Method...and I'm confused. Does it have something to do with instances in my library?
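
    A hedged sketch of one fix: the library's survey_form() never receives $method (or $data), so pass them through explicitly from the controller; nothing about library instances is needed for this:

        // In the controller:
        $this->survey_form_processing->survey_form($this->_container, $data, $method);

        // In the library, accept them as parameters:
        function survey_form($container, $data, $method)
        {
            if ($method == 1) {
                // ... existing form setup, validation and rendering ...
            } else {
                redirect('home', 'location');
            }
        }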

  • Big o notation runtime

    - by mrblippy
    Hi, I have been given some code and asked to work out the big-O runtimes. Could someone tell me if I am on the right track or not?

        //program1
        int i, count = 0, n = 20000;
        for(i = 0; i < n * n; i++) { count++; }

    Is that O(n^2)?

        //number2
        int i, inner_count = 0, n = 2000000000;
        for(i = 0; i < n; i++) { inner_count++; }

    Is this one O(n)?

        //number3
        for(i = 0; i < n; i++) { for(j = 0; j < n; j++) { count++; } }

    O(n^2)?

        //number4
        for(i = 0; i < n; i++) { for(j = 0; j < i; j++) { for(k = 0; k < j; k++) { inner_count++; } } }

    Is that O(n^3)?

        //number5
        int i, j, inner_count = 0, n = 30000;
        for(i = 0; i < n; i++) { for(j = 0; j < i; j++) { inner_count++; } }

    Is that one O(n^3)?

        //number6
        int i, j, k, l, pseudo_inner_count = 0, n = 25;
        for(i = 0; i < n; i++) { for(j = 0; j < i*i; j++) { for(k = 0; k < i*j; k++) { pseudo_inner_count++; for(l = 0; l < 10; l++); } } }

    Very confused about this one. O(n^3)?

        //number7
        int i, j, k, pseudo_inner_count = 0, n = 16;
        for(i = n; i > 1; i /= 2) { for(j = 0; j < n; j++) { pseudo_inner_count++; for(k = 0; k < 50000000; k++); } }

    O(n)? (I get more lost as they get harder.)

        //number8
        int i, j, pseudo_inner_count = 0, n = 1073741824;
        for(i = n; i > 1; i /= 2) { pseudo_inner_count++; for(j = 0; j < 50000000; j++); }

    O(n^2)? If anyone could clarify these and help me understand them better, I would be very grateful - cheers
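
    As a sanity check on the triangular cases, one counting step that may help: in #5 the inner statement runs once per pair (i, j) with j < i, i.e. 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 times, which is O(n^2) rather than O(n^3); #4 sums that triangle once more over k < j, which does give O(n^3). For #7 and #8, note that i /= 2 makes the outer loop run about log2(n) times, so (treating the constant-bound innermost loops as constants) the counts come out to O(n log n) and O(log n) respectively.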

  • CookieContainer bug?

    - by Salar
    I'm confused about how CookieContainer handles domains, so I created this test. The test shows that CookieContainer doesn't return any cookie for "site.com", but according to the RFC it should return at least two cookies. Isn't this a bug? How can I make it work? Here is a discussion about this bug: http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/c4edc965-2dc2-4724-8f08-68815cf1dce6

        <%@ Page Language="C#" %>
        <%@ Import Namespace="System.Net" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <script runat="server">
            CookieContainer getContainer()
            {
                CookieContainer result = new CookieContainer();
                Uri uri = new Uri("http://sub.site.com");
                string cookieH = @"Test1=val; domain=sub.site.com; path=/";
                result.SetCookies(uri, cookieH);
                cookieH = @"Test2=val; domain=.site.com; path=/";
                result.SetCookies(uri, cookieH);
                cookieH = @"Test3=val; domain=site.com; path=/";
                result.SetCookies(uri, cookieH);
                return result;
            }

            void Test()
            {
                CookieContainer cookie = getContainer();
                lblResult.Text += "<br>Total cookies count: " + cookie.Count + " &nbsp;&nbsp; expected: 3";
                Uri uri = new Uri("http://sub.site.com");
                CookieCollection coll = cookie.GetCookies(uri);
                lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2";
                uri = new Uri("http://other.site.com");
                coll = cookie.GetCookies(uri);
                lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2";
                uri = new Uri("http://site.com");
                coll = cookie.GetCookies(uri);
                lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2";
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                Test();
            }
        </script>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>CookieContainer Test Page</title>
        </head>
        <body>
            <form id="frmTest" runat="server">
                <asp:Label ID="lblResult" EnableViewState="false" runat="server"></asp:Label>
            </form>
        </body>
        </html>
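
    A hedged workaround that is often suggested for this: register each parent-domain cookie under both domain forms, with and without the leading dot, so lookups succeed for "site.com" as well as its subdomains. A sketch using the Cookie constructor (name, value, path, domain):

        CookieContainer result = new CookieContainer();
        result.Add(new Cookie("Test2", "val", "/", ".site.com")); // matches sub.site.com
        result.Add(new Cookie("Test2", "val", "/", "site.com"));  // matches site.com itself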

  • How to select table column names in a view and pass to controller in rails?

    - by zachd1_618
    So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to basically pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to the controller. For simplicity's sake, say my database schema looks like this:

        --------------Plot---------------
        |____x____|____y1____|____y2____|
        |    1    |    1     |    1     |
        |    2    |    2     |    4     |
        |    3    |    3     |    9     |
        |    4    |    4     |    16    |
        |    5    |    5     |    25    |

    ... and in my model, I have dynamic selector scopes that will let me select just certain columns of data, in Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda {|varname| select(varname)}
          scope :between_x, lambda {|x1,x2| where("x BETWEEN ? and ?","#{x1}","#{x2}")}
        end

    So this way, I can call:

        irb>> @p1 = Plot.select_var(['x','y1']).between_x(1,3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I dynamically plot. I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is, most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote.

    Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x','y1','y2']. Then in my plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller.

    index.html.erb:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars,:name,:multiple=>:true) )%>
        <%= button_to 'Plot now!', :controller =>"plots/plot_vars", :variable => params[:variable]%>

    Finally, in the controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is, every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-(

    I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult for me to figure out right now.
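
    A hedged sketch of the likely fix: button_to renders its own one-field form, so the select's value never travels with the click; wrapping the select and the submit in a single form_tag posts params[:variable] to the action. Helper names below are standard Rails; the route and the Rails 2-era <% form_tag %> syntax are assumptions (Rails 3 wants <%= form_tag %>):

        <% form_tag({:controller => 'plots', :action => 'plot_vars'}) do %>
          <%= select_tag 'variable[]', options_for_select(@tags), :multiple => true %>
          <%= submit_tag 'Plot now!' %>
        <% end %>

        # in the controller, params[:variable] is then an array of column names:
        def plot_vars
          @plot_data = Plot.select_var(params[:variable])
        end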

  • XHTML 1.0 DocType ignored in all browsers?

    - by John
    I was testing this, since I understood that using XHTML lets me use any valid XML for empty <div> elements:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
            <title>Test</title>
        </head>
        <body>
            <div style="border:solid 10px black; width:100px; height:100px"></div>
            <div style="border:solid 10px red; width:100px; height:100px"></div>
            <div style="border:solid 10px blue; width:100px; height:100px"></div>
            <div style="border:solid 10px black; width:100px; height:100px" />
            <div style="border:solid 10px red; width:100px; height:100px" />
            <div style="border:solid 10px blue; width:100px; height:100px" />
        </body>
        </html>

    It doesn't work in any browser I try... this is how Firebug tells me it understands the document:

        <html>
        <head>
            <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
            <title>Test</title>
        </head>
        <body>
            <div style="border: 10px solid black; width: 100px; height: 100px;"/>
            <div style="border: 10px solid red; width: 100px; height: 100px;"/>
            <div style="border: 10px solid blue; width: 100px; height: 100px;"/>
            <div style="border: 10px solid black; width: 100px; height: 100px;">
                <div style="border: 10px solid red; width: 100px; height: 100px;">
                    <div style="border: 10px solid blue; width: 100px; height: 100px;"/>
                </div>
            </div>
        </body>
        </html>

    I'm a bit confused about what the point of using XHTML is if I have to do this; I might as well just use HTML? Note that setting the content type to content="application/xhtml+xml" makes no difference, in FF3 at least.
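
    For context, a hedged note on why this happens: browsers choose between the HTML and XML parsers based on the Content-Type header of the HTTP response, so a meta http-equiv inside the markup cannot switch a parse that is already underway; served as text/html, a self-closing <div /> is not treated as an empty element, which matches what Firebug shows. One way to actually get XML parsing is to serve the page with the XHTML media type, for example (an Apache sketch, assuming .xhtml files):

        AddType application/xhtml+xml .xhtml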

  • Please clarify how create/update happens against child entities of an aggregate root

    - by christian
    After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ, but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing. If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost.

    Here is a fabricated example... hope it holds enough water to discuss. Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone with my insurance company). Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so this is potentially an aggregate as well, which changes Policy... but assume it's simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an Entity but not a root. So Policy in this case is an aggregate root.

    Now, to create a Policy, let's assume it has to have at least one auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take as input an Auto, or be restricted via a builder to not be created without this Auto. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z). If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy.

    One more twist: later, after the Policy is created, one wishes to add another Auto... or update an existing Auto. Clearly, the Policy controls this... fine... but Policy.addAuto() won't quite fly because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.), but they are all so simple that that looks reasonable. But if this factory-method approach falls apart with too many parameters (the entire Auto interface, conceivably), I need a solution.

    From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So maybe it is fine to have an entity created outside of its parent within the aggregate in a transient environment, so maybe it is OK to say something like auto = AutoFactory.createAuto(); auto.setX; auto.setY (or, if sticking to immutability, AutoBuilder.new().setX().setY().build()) and then have it get sorted out when you say Policy.addAuto(auto).

    This insurance example gets more interesting if you add Events, such as an Accident with its PolicyReports or RepairEstimates... some value objects, but mostly entities that are all really meaningless outside the policy... at least for my simple example. The lifecycle of Policy with its growing hierarchy over time seems the fundamental picture I must draw before really starting to dig in... and it is more the factory concept, or how the child entities get built/attached to an aggregate root, that I haven't seen a solid example of. I think I'm close. Hope this is clear and not just a repeat FAQ that has answers all over the place.
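
    A sketch of the transient-builder shape this is converging on (every name here is invented for illustration): the Auto is assembled and validated outside the aggregate, and only becomes part of it when Policy accepts it and enforces its own invariants:

        // hypothetical API; the point is the hand-off, not the names
        Auto auto = new AutoBuilder()
                .vin("VIN-EXAMPLE-123")          // sample data, not a real VIN
                .make("Honda")
                .model("Accord")
                .garagedAt(new Location("Springfield", "IL"))
                .build();        // enforces Auto's own invariants; still transient

        policy.addAuto(auto);    // Policy enforces aggregate-level invariants here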

  • Trying to reinvent the wheel of StackOverflow to have a good learning experience. Need some suggesti

    - by Legend
    I want to learn, and am not able to do it unless I have a real "mission" to complete. SO is my favorite, and I can't imagine a better experience than actually recreating it, but not in ASP. I'd like to use PHP + MySQL + jQuery. So far I have been a self-taught programmer, but I would like to master one paradigm that forces you to adhere to standards. For instance, recently jQuery forced me to use some "rules": the plugins were supposed to be written in a particular way, and that's it. When I started off, everything seemed like Greek and Latin, but when I finished a very small plugin I felt really happy, because it forced me to program in a certain way. I am looking for something like this, only in a larger project. I've heard a lot about MVC and all, but I am confused about the various frameworks out there. Zend seems awesome but looks heavy at the same time, and also requires you to have a lot more control over the web server, whereas CakePHP is a good and fast framework that needs only a little control. Do I use one of these or just write my own MVC?

    I have the following goals:

    - The site should be fast. I know this depends on my coding skills, but I will learn on my way; the framework itself should not slow me down.
    - Setting up the site should not require the command line. This requirement is OK during development, but some frameworks like Symfony require you to initialize certain things through the command line.
    - It should support pluggable modules. For instance, if I want to be able to use the FCK editor, I should be able to organize things in a good way.
    - It should be possible to extend. For instance, SO is mainly a Q&A site, but I should be able to logically extend it into an Idea Management System (optional, but I'm curious). This goes more into code reusability, I guess.

    I am comfortable with MySQL, so I should be done with database design etc. with some serious effort. As for PHP, I can write code on my own but haven't really used any frameworks much. jQuery I started with recently, and I love it. I would be glad if someone can guide me during these initial steps. Precisely, when designing something like SO, I have the following questions:

    - Do I use a framework? If yes, should it be MVC? If MVC, which one is a good and scalable one? (I'd love something like jQuery that will not die anytime soon.)
    - How do I balance the functionality? The same logic can sometimes be made server-centric or client-centric (more Ajax?).
    - Is it a good idea to make a heavy JavaScript site, considering the recent advances in client-side JS processing?

    Just in case anyone is wondering, I am not interested in commercializing any of this. I need a reason to learn something :)

  • Why does one of two identical Javascripts work in Firefox?

    - by Gigpacknaxe
    Hi, I have two image swap functions, and one works in Firefox and the other does not. The swap functions are identical, and both work fine in IE. Firefox does not even recognize the images as hyperlinks. I am very confused, and I hope someone can shed some light on this for me. Thank you very much in advance for any and all help. FYI: the working script swaps onClick via DIV elements, and the non-working script swaps onMouseOver/Out via "a" elements. Remember, both of these work just fine in IE. Joshua

    Working JavaScript in FF:

        <script type="text/javascript">
        var aryImages = new Array();
        aryImages[1] = "/tires/images/mich_prim_mxv4_profile.jpg";
        aryImages[2] = "/tires/images/mich_prim_mxv4_tread.jpg";
        aryImages[3] = "/tires/images/mich_prim_mxv4_side.jpg";
        for (i=0; i < aryImages.length; i++) {
            var preload = new Image();
            preload.src = aryImages[i];
        }
        function swap(imgIndex, imgTarget) {
            document[imgTarget].src = aryImages[imgIndex];
        }
        </script>

        <div id="image-container">
        <div style="text-align: right">Click small images below to view larger.</div>
        <div class="thumb-box" onclick="swap(1, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_profile_thumb.jpg" width="75" height="75" /></div>
        <div class="thumb-box" onclick="swap(2, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_tread_thumb.jpg" width="75" height="75" /></div>
        <div class="thumb-box" onclick="swap(3, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_side_thumb.jpg" width="75" height="75" /></div>
        <div><img alt="" name="imgColor" src="/tires/images/mich_prim_mxv4_profile.jpg" /></div>
        <div><a href="mich-prim-102-large.php"><img src="/tires/images/super_view.jpg" border="0" /></a></div>

    Not working in FF:

        <script type="text/javascript">
        var aryImages = new Array();
        aryImages[1] = "/images/home-on.jpg";
        aryImages[2] = "/images/home-off.jpg";
        aryImages[3] = "/images/services-on.jpg";
        aryImages[4] = "/images/services-off.jpg";
        aryImages[5] = "/images/contact_us-on.jpg";
        aryImages[6] = "/images/contact_us-off.jpg";
        aryImages[7] = "/images/about_us-on.jpg";
        aryImages[8] = "/images/about_us-off.jpg";
        aryImages[9] = "/images/career-on.jpg";
        aryImages[10] = "/images/career-off.jpg";
        for (i=0; i < aryImages.length; i++) {
            var preload = new Image();
            preload.src = aryImages[i];
        }
        function swap(imgIndex, imgTarget) {
            document[imgTarget].src = aryImages[imgIndex];
        }
        </script>

        <td>
            <a href="home.php" onMouseOver="swap(1, 'home')" onMouseOut="swap(2, 'home')"><img name="home" src="/images/home-off.jpg" alt="Home Button" border="0px" /></a>
        </td>
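
    A hedged guess, assuming both snippets end up included in the same page: the second script's aryImages and swap() redeclare and silently replace the first's, so one gallery reads the wrong array. Giving each gallery its own closure avoids the collision; a sketch:

        // one self-contained swapper per gallery, no shared globals
        var navSwap = (function () {
            var images = [];
            images[1] = "/images/home-on.jpg";
            images[2] = "/images/home-off.jpg";
            // ... the remaining nav images ...
            for (var i = 1; i < images.length; i++) {
                new Image().src = images[i];    // preload
            }
            return function (imgIndex, imgTarget) {
                document[imgTarget].src = images[imgIndex];
            };
        }());
        // usage: onMouseOver="navSwap(1, 'home')" onMouseOut="navSwap(2, 'home')"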

  • Zend models and database relathionships

    - by user608341
    Hi people, I'm starting with Zend Framework and I'm a little bit confused about models and relationships (one-to-many, many-to-many, etc.). The "Zend Framework Quick Start" says to create a Zend_Db_Table, a Data Mapper, and finally our model class. Suppose we have a database like this:

        table A (
            id integer primary key,
            name varchar(50)
        );
        table B (
            id integer primary key,
            a_id integer references A
        );

    Then I'll create: Application_Model_DbTable_A extends Zend_Db_Table_Abstract, Application_Model_AMapper, Application_Model_A, Application_Model_DbTable_B extends Zend_Db_Table_Abstract, Application_Model_BMapper, Application_Model_B. If I understood correctly, I have to store the reference information in Application_Model_DbTable_A:

        protected $_dependentTables = array('B');

    and Application_Model_DbTable_B:

        protected $_referenceMap = array(
            'A' => array(
                'columns' => array('a_id'),
                'refTableClass' => 'A',
                'refColums' => array('id')
            )
        );

    And my model classes:

        class Application_Model_A {
            protected $_id;
            protected $_name;
            public function __construct(array $options = null) { if(is_array($options)) { $this->setOptions($options); } }
            public function __set($name, $value) { $method = 'set' . $name; if (('mapper' == $name) || !method_exists($this, $method)) { throw new Exception('Invalid property'); } $this->$method($value); }
            public function __get($name) { $method = 'get' . $name; if (('mapper' == $name) || !method_exists($this, $method)) { throw new Exception('Invalid property'); } return $this->$method(); }
            public function setOptions(array $options) { $methods = get_class_methods($this); foreach ($options as $key => $value) { $method = 'set' . ucfirst($key); if (in_array($method, $methods)) { $this->$method($value); } } return $this; }
            public function setName($name) { $this->_name = (string) $name; return $this; }
            public function getName() { return $this->_name; }
            public function setId($id) { $this->_id = (int) $id; return $this; }
            public function getId() { return $this->_id; }
        }

        class Application_Model_B {
            protected $_id;
            protected $_a_id;
            public function __construct(array $options = null) { if(is_array($options)) { $this->setOptions($options); } }
            public function __set($name, $value) { $method = 'set' . $name; if (('mapper' == $name) || !method_exists($this, $method)) { throw new Exception('Invalid property'); } $this->$method($value); }
            public function __get($name) { $method = 'get' . $name; if (('mapper' == $name) || !method_exists($this, $method)) { throw new Exception('Invalid property'); } return $this->$method(); }
            public function setOptions(array $options) { $methods = get_class_methods($this); foreach ($options as $key => $value) { $method = 'set' . ucfirst($key); if (in_array($method, $methods)) { $this->$method($value); } } return $this; }
            public function setA_id($a_id) { $this->_a_id = (int) $a_id; return $this; }
            public function getA_id() { return $this->_a_id; }
            public function setId($id) { $this->_id = (int) $id; return $this; }
            public function getId() { return $this->_id; }
        }

    Is that right?
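
    Almost; a hedged sketch of the two details that usually trip this up in Zend_Db_Table: the reference-map key is spelled refColumns, and both refTableClass and $_dependentTables take the full DbTable class names rather than the table names:

        // In Application_Model_DbTable_A:
        protected $_dependentTables = array('Application_Model_DbTable_B');

        // In Application_Model_DbTable_B:
        protected $_referenceMap = array(
            'A' => array(
                'columns'       => array('a_id'),
                'refTableClass' => 'Application_Model_DbTable_A',
                'refColumns'    => array('id'),
            ),
        );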

  • Testing shared memory ,strange thing happen

    - by barfatchen
    I have two programs, compiled with GCC 4.1.2 and running on RedHat 5.5, that I use for a simple shared-memory test. shmem1.c looks like the following:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main (void)
        {
            int first = 0;
            int shm_fd;
            int idx;
            static SHARED_VAR *conf;

            if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_EXCL | O_RDWR), (S_IREAD | S_IWRITE))) > 0 ) {
                first = 1; /* We are the first instance */
            }
            else if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_RDWR), (S_IREAD | S_IWRITE))) < 0) {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }
            if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) {
                return errno;
            }
            if(first) {
                for(idx=0; idx < 1000000000; idx++) {
                    conf->heartbeat = conf->heartbeat + 1;
                }
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            shm_unlink(STATE_FILE);
            exit(0);
        } //main

    And shmem2.c looks like the following:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main (void)
        {
            int first = 0;
            int shm_fd;
            static SHARED_VAR *conf;

            if((shm_fd = shm_open(STATE_FILE, (O_RDWR), (S_IREAD | S_IWRITE))) < 0) {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }
            ftruncate(shm_fd, sizeof(SHARED_VAR));
            if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) {
                return errno;
            }
            int idx;
            for(idx=0; idx < 1000000000; idx++) {
                conf->heartbeat = conf->heartbeat + 1;
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            exit(0);
        }

    After compiling:

        gcc shmem1.c -lpthread -lrt -o shmem1.exe
        gcc shmem2.c -lpthread -lrt -o shmem2.exe

    I run both programs at almost the same time in two terminals:

        [test]$ ./shmem1.exe
        First creation of the shm. Setting up default values
        conf->heartbeat=(840825951)

        [test]$ ./shmem2.exe
        conf->heartbeat=(1215083817)

    I am confused!! Since shmem1.c loops 1,000,000,000 times, how can it possibly print an answer like 840,825,951? When I run shmem1.exe and shmem2.exe this way, most of the time conf->heartbeat ends up larger than 1,000,000,000, but seldom and randomly I see a result smaller than 1,000,000,000, in either shmem1.exe or shmem2.exe!! If I run shmem1.exe alone, it always prints 1,000,000,000. My question is: what causes conf->heartbeat=(840825951) in shmem1.exe?

    Update: Although I am not sure, I think I have figured out what is going on. Say shmem1.exe has run 10 increments, so conf->heartbeat = 10; at that point shmem1.exe takes a rest, and when it comes back it reads conf->heartbeat = 8 from shared memory, so it continues from 8. Why is conf->heartbeat = 8? I think it is because shmem2.exe updated the shared data to 8, and shmem1.exe did not write 10 back to shared memory before it took its rest... That is just my theory... I don't know how to prove it!!
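
    The numbers are consistent with a plain data race: conf->heartbeat = conf->heartbeat + 1 compiles into a separate load, add, and store, so when both processes run, each can overwrite increments the other made in between (exactly the lost-update theory in the question's update). A hedged sketch of one way to confirm it, using the atomic builtins available since GCC 4.1:

        /* With an atomic read-modify-write, no increments are lost; running both
           programs should then always end at or above 1,000,000,000. */
        for (idx = 0; idx < 1000000000; idx++) {
            __sync_fetch_and_add(&conf->heartbeat, 1);
        }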

  • DropDownList not working 100% in ASP.Net MVC

    - by Program.X
    I am quite confused about how to effectively use the Html.DropDownList helper for ASP.NET MVC. Background: I have a 5-page form which saves data each time "Next" is clicked. Users may navigate back and forth between sections, so previous sections will already be pre-populated with previously-entered data. This works for TextBoxes, but not DropDownLists. I have tried a load of different methods, including:

    http://stackoverflow.com/questions/867117/how-to-add-static-list-of-items-in-mvc-html-dropdownlist/867218#867218
    http://stackoverflow.com/questions/2080802/setting-selected-item-to-dropdownlist-in-mvc-application

    I have a ViewModel such that I have my lists and my Model (a LINQ-to-SQL generated class) as properties, e.g.:

        public class ConsultantRegistrationFormViewModel
        {
            public IConsultantRegistration ConsultantRegistration { get; private set; }
            public SelectList Titles { get; private set; }
            public SelectList Countries { get; private set; }
            public SelectList Currencies { get; private set; }
            public int CurrentSection { get; private set; }

            private ConsultantRegistrationFormViewModel(IConsultantRegistration consultantRegistration)
            {
                ConsultantRegistration = consultantRegistration;
                CurrentSection = 1;
                Titles = new SelectList(new string[] { "Mr", "Mrs", "Miss", "Ms", "Dr", "Sir" });
                Countries = new SelectList(countries.Select(q => q.Name));
                Currencies = new SelectList(currencies, "CurrencyCode", "FriendlyForm");
            }
        }

    My controller's Edit action on GET looks like:

        public class ConsultantRegistrationController : Controller
        {
            public IConsultantRegistrationRepository ConsultantRegistrationRepository { get; private set; }
            public ICountryRepository CountryRepository { get; private set; }
            public IEnumerable<ICountry> Countries { get; private set; }

            public ConsultantRegistrationController()
            {
                ConsultantRegistrationRepository = RepositoryFactory.CreateConsultantRegistrationRepository();
                CountryRepository = RepositoryFactory.CreateCountryRepository();
                Countries = CountryRepository.GetCountries().ToArray();
            }

            public ActionResult Edit(Guid id, int sectionIndex)
            {
                IConsultantRegistration consultantRegistration = ConsultantRegistrationRepository.GetConsultantRegistration(id);
                SelectList bankBranchCountriesSelectList = new SelectList(Countries, "BankBranchCountry", "CountryName", consultantRegistration.BankBranchCountry);
                ViewData["bankBranchCountrySelectList"] = bankBranchCountriesSelectList;
                return View(new ConsultantRegistrationFormViewModel(consultantRegistration, sectionIndex, Countries, Currencies));
            }
        }

    With my view doing:

        <%: Html.DropDownList("ConsultantRegistration.BankBranchCountry", ViewData["bankBranchCountrySelectList"] as SelectList) %>

    This gives me the error: DataBinding: 'IWW.ArrowPay.ConsultantRegistration.Data.Country' does not contain a property with the name 'BankBranchCountry'. Which it does; have a look at the schema of this property:

        public interface IConsultantRegistration
        {
            Guid ID { get; set; }

            [DisplayName("Branch Country")]
            string BankBranchCountry { get; set; }
        }

    (My LINQ-to-SQL type ConsultantRegistration implements IConsultantRegistration.) It seems that it is trying to bind to the wrong type, though? If I use this in my view (using my controller's Countries property):

        <%: Html.DropDownList("ConsultantRegistration.BankBranchCountry", Model.Countries, "(select a Country)") %>

    I get the saved value fine, but my model doesn't update on POST. And if I use this in my view:

        <%: Html.DropDownListFor(model => model.ConsultantRegistration.BankBranchCountry, Model.Countries, "(select a Country)") %>

    I get the list, and it POSTs the selected value back, but it does not pre-select the currently selected item in my model on the view. So I have a bit of the solution all over the place, but not all in one place. Hope you can help fill in my ignorance.
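
    A hedged reading of that DataBinding error: SelectList's second argument names the value property to read off each element of Countries, and the Country type has no BankBranchCountry property; the saved value belongs in the fourth (selectedValue) argument, which is already being passed. A sketch, assuming Country exposes a CountryName property:

        // Value and text both come from Country.CountryName;
        // the fourth argument pre-selects the saved value.
        SelectList countries = new SelectList(
            Countries,
            "CountryName",                             // dataValueField
            "CountryName",                             // dataTextField
            consultantRegistration.BankBranchCountry); // selectedValue

        // then in the view, the lambda helper both pre-selects and posts back:
        // <%: Html.DropDownListFor(m => m.ConsultantRegistration.BankBranchCountry,
        //                          Model.Countries, "(select a Country)") %>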

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you have any blockers then you can be pretty sure that everyone who can deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010

    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.

    Figure: Even the SQL Setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases

    As we are moving from one server to another (the recommended method), we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach: this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.

    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

        John Liu [SSW] said: are you doing something to TFS :-O
        MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
        John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
        MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
        John Liu [SSW] said: I love you!

        - IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done, I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.

    Figure: Back up all of the databases for TFS and include the Reporting Services, just in case.

    Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed, as if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

        TFS2008 file count:
        Type 1: 1845
        Type 2: 15770
        Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
    Restore Team Foundation Server 2008 Databases

    Restoring the databases is much more time-consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems.

    Figure: Restore each of the databases to either the latest or a specific point in time.

    Figure: Restore all of the required databases.

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases

    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all six main TFS 2008 databases are merged into the TfsVersionControl database, which is upgraded and then renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical files as well. This keeps everything neat and tidy.

    If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients, and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools", and run the "import" command in "tfsconfig":

        TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                         /collectionName:<Collection Name>
                         /confirmed

        Imports a TFS 2005 or 2008 data tier as a new project collection.
        Important: This command should only be executed after adequate backups have
        been performed. After you import, you will need to configure portal and
        reporting settings via the administration console.

        EXAMPLES
        --------
        TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
        TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

        OPTIONS
        -------
        sqlinstance     The sql instance of the TFS 2005 or 2008 data tier. The TFS
                        databases at that location will be modified directly and will
                        no longer be usable as previous version databases. Ensure you
                        have back-ups.
        collectionName  The name of the new Team Project Collection.
        confirmed       Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database, with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

        The Upgrade operation on the ApplicationTier feature has completed.
        There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.

    Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again you will get a conflict with project names, and you will need to remove the Test Upgrade collection. This is fine; it just needs to be detached.

    Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server? Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control:

        TFS2010 file count:
        Type 1: 19288
        Areas & Iterations: 139

    Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 Project Collection

    Can we connect to the new collection and project?

    Figure: We can connect to the new collection and project.

    Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.

    Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

    Upgrade Done!

    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it was running under before. You can only "enable" 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, you can move or branch, but work items are more difficult, as you can't move them between projects. This instance is complicated further, as the old project uses the Conchango/EMC Scrum for Team System template and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!

    Technorati Tags: TFS 2010, TFS 2008, VS ALM
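
    For the detach/rename/attach step, a hedged T-SQL sketch (the collection name and file paths here are illustrative; the mssqltips link above covers it in detail):

        -- Detach, rename the physical files on disk, then re-attach under the new name.
        EXEC sp_detach_db N'TFS_TestUpgrade';
        -- rename TfsVersionControl.mdf / .ldf to TFS_TestUpgrade.mdf / .ldf, then:
        CREATE DATABASE [TFS_TestUpgrade]
            ON (FILENAME = N'D:\Data\TFS_TestUpgrade.mdf'),
               (FILENAME = N'D:\Data\TFS_TestUpgrade_log.ldf')
            FOR ATTACH;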

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

    Part 2: SQL Server Memory Management

    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

    1. Conventional Memory Model

    The Conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic – can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
    - Pageable – can be paged out to disk by the operating system.

    2. Locked Page Model

    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic – can grow or shrink its working set in the same way as the Conventional model.
    - OS uses 4K pages.
    - Non-Pageable – when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory – "locked" memory works perfectly well with "dynamic" memory.

    * Note: in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

    3. Large Page Model

    The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static – memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-Pageable.

    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model, since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

    When does SQL Server grow?

    For "dynamic" configurations (the Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

    - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value.
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VM's cause Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see cache shrinking description above).    This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicaability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that now the max server memory setting includes any size page allocations and managed CLR procedure allocations it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is no more AWE or hot-add support for 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
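As a footnote to the settings and signals discussed above, here is a minimal T-SQL sketch. The 4096 MB cap is an arbitrary illustration, not a recommendation, and the two DMVs shown are available in SQL Server 2008 and later:

   -- Cap the buffer pool (example value only; size it to your workload and VM settings)
   EXEC sp_configure 'show advanced options', 1;
   RECONFIGURE;
   EXEC sp_configure 'max server memory (MB)', 4096;
   RECONFIGURE;

   -- Observe the OS memory signals SQL Server reacts to
   SELECT available_physical_memory_kb,
          system_high_memory_signal_state,
          system_low_memory_signal_state
   FROM sys.dm_os_sys_memory;

   -- Check whether the instance is running with locked pages
   SELECT locked_page_allocations_kb
   FROM sys.dm_os_process_memory;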

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website that uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

What is faceted search?
Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all:

The SQL solution for faceted search
Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

   SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
   FROM Houses
   WHERE city = 'Amsterdam'

While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. Also, performing COUNTs over all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

Enter Solr
"Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr)

Solr isn't a database; it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr wiki pages.

Getting faceted search results from Solr is very easy. First, let me show you how to send an HTTP query to Solr:

   http://localhost:8983/solr/select?q=city:Amsterdam

This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):

   <response>
      <result name="response" numFound="3" start="0">
         <doc>
            <long name="id">3203</long>
            <str name="city">Amsterdam</str>
            <str name="street">Keizersgracht</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
         <doc>
            <long name="id">3205</long>
            <str name="city">Amsterdam</str>
            <str name="street">Vondelstraat</str>
            <int name="numberOfBathrooms">3</int>
         </doc>
         <doc>
            <long name="id">4293</long>
            <str name="city">Amsterdam</str>
            <str name="street">Wibautstraat</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
      </result>
   </response>

By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field.
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms:

   http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

The complete XML response from Solr now looks like:

   <response>
      <result name="response" numFound="3" start="0">
         <doc>
            <long name="id">3203</long>
            <str name="city">Amsterdam</str>
            <str name="street">Keizersgracht</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
         <doc>
            <long name="id">3205</long>
            <str name="city">Amsterdam</str>
            <str name="street">Vondelstraat</str>
            <int name="numberOfBathrooms">3</int>
         </doc>
         <doc>
            <long name="id">4293</long>
            <str name="city">Amsterdam</str>
            <str name="street">Wibautstraat</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
      </result>
      <lst name="facet_fields">
         <lst name="numberOfBathrooms">
            <int name="2">2</int>
            <int name="3">1</int>
         </lst>
      </lst>
   </response>

Trying Solr for yourself
To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes an hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial you're using Jetty as a Java servlet container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs as a Windows service and works more like .NET developers expect. See the SolrTomcat page.

Some best practices for running Solr on Windows:
- Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
- Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well.
- Use filter queries (the "fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance; a small example follows below).

In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
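To make the filter-query tip concrete, here is a hedged sketch reusing the sample fields from the example above (the field names come from the sample data; how much the caching helps depends on how often the same restriction recurs). The city restriction moves into an fq clause that Solr can cache and reuse across requests:

   http://localhost:8983/solr/select?q=*:*&fq=city:Amsterdam&facet=true&facet.field=numberOfBathrooms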

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

2007
Spatial Database Definition and Research Documents
Here is the definition of a spatial database from Wikipedia: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
Select Only Date Part From DateTime – Best Practice
A very common question I receive is how to get only the date or the time part from a datetime value. In this blog post I explain the same in very simple words.
T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table
I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.
SQL SERVER – Cannot resolve collation conflict for equal to operation
One of the very first errors I ever encountered in my career was this conflict. I have blogged about it, and I have realized that there are many others like me who are facing this error.
LEN and DATALENGTH of NULL Simple Example
Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.
Recovery Models and Selection
A very simple and easy explanation of the database backup recovery models and how to select the best option for you.
Explanation SQL SERVER Hash Join
A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).
Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns
NorthWind Database or AdventureWorks Database – Samples Databases
In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.
sp_HelpText for sp_HelpText – Puzzle
A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.

2008
SQL SERVER – 2008 – Step By Step Installation Guide With Images
When SQL Server 2008 was newly introduced, lots of people had no clue how to install it, and the number of questions I used to receive was huge. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Still today this blog post has been the bible for all the people who are confused by SQL Server installation.
Inline Variable Assignment
I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from the Microsoft SQL Server team, I talked about this feature. I think this feature saves some time and also makes the code more readable (a short sketch follows below).
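As a quick, hedged illustration of the inline variable assignment feature mentioned above (SQL Server 2008 and later; the variable names are made up):

   -- Declare and assign in a single statement
   DECLARE @Counter INT = 1;
   DECLARE @StartTime DATETIME = GETDATE();
   SELECT @Counter AS Counter, @StartTime AS StartTime;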
Introduction to Policy Management – Enforcing Rules on SQL Server
If our company policy is to create all stored procedures with the prefix 'usp', developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy that will prevent any future stored procedure from being created with a different prefix.

2009
Performance Counters from System Views – By Kevin Mckenna
Many of you are not aware of the fact that access to performance information is readily available in SQL Server, without querying performance counters using a custom application or via perfmon. Through this post Kevin explains how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL (a minimal query sketch appears a little further below).
Customize Toolbar – Remove Debug Button from Toolbar
I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The debugger button looks similar to a play button and is used to run debugging commands of Visual Studio. For this reason it gets quite infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.
Effect of Normalization on Index and Performance
A very interesting conversation that started on Twitter. If you want to read one link, this is the link I encourage you to read.
SSMS Feature – Multi-server Queries
Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere.
Query Optimizer Hint ROBUST PLAN – Question to You
"ROBUST PLAN" is a kind of query hint that works quite differently from other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not fail due to a row exceeding the maximum row size. Let me elaborate upon it in the blog post.

2010
Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples:
Fastest Way to Restore the Database
Difference Between DATETIME and DATETIME2
Difference Between DATETIME and DATETIME2 – WITH GETDATE
Shrinking NDF and MDF Files – Readers' Opinion
Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the shrinking database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
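Referring back to the system view mentioned in the 2009 section above, a minimal query sketch (the counter name shown is just one example of the many available):

   -- Peek at the counters SQL Server exposes to perfmon, via T-SQL
   SELECT object_name, counter_name, instance_name, cntr_value
   FROM sys.dm_os_performance_counters
   WHERE counter_name LIKE 'Page life expectancy%';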
2011
Solution – Puzzle – SELECT * vs SELECT COUNT(*)
This is a very interesting question, and I am very confident that not everyone knows the answer. Let me ask you again: which will be faster, SELECT * or SELECT COUNT(*)? Or do you think this is an apples-and-oranges comparison?

2012
Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage
In SQL Server 2012 there are a few enhancements with regard to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with the help of three different examples.
Finding Size of a Columnstore Index Using DMVs
One of the very common questions I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with a columnstore index; however, I believe the following script is sufficient to start getting an idea of columnstore index size.
Developer Training Resources and Summary Roundup
Developer Training – Importance and Significance – Part 1
In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; it is the most important task.
Developer Training – Employee Morals and Ethics – Part 2
In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.
Developer Training – Difficult Questions and Alternative Perspective – Part 3
This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization.
Developer Training – Various Options for Developer Training – Part 4
In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and that I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.
Developer Training – A Conclusive Summary – Part 5
There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt and overcome" are essential lessons of life. One cannot stop learning and resist change. In the IT industry the "ego of knowing it all" and "resistance to change" are the most challenging issues.
A Quick Look at Logging and Ideas around Logging
Question: What is the first thing that comes to your mind when you hear the word "logging"? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.
Beginning Performance Tuning with SQL Server Execution Plan
Solution of Puzzle – Swap Value of Column Without Case Statement
Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the quality of the entries (one classic approach is sketched below). Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
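As a hedged illustration of one classic approach to the swap puzzle mentioned above (assuming the column holds only the two values 1 and 2; the table and column names are made up):

   -- Swap 1 <-> 2 arithmetically, with no CASE expression
   UPDATE SampleTable
   SET Value = 3 - Value
   WHERE Value IN (1, 2);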

    Read the article

  • Part 2: Career development as a Software Developer without becoming a manager.

    - by albertpascual
    It seems my previous post, inspired by the work of Michael “Doc” Norton, was a great success, judging by the amount of email I have received. I am still amazed how many people didn’t want to discuss their questions in the comments section. I would encourage people to be more public, but I will reply to all of you through this public medium, and I still welcome those emails. What I found out is that many people feel like me: they want to stay developers and still be compensated for their experience, without taking a job as a manager. Their perfect day is a full day of coding and learning. Many believe their companies will never pay a manager’s salary to a developer, no matter what. Most of you ask how to get the ball rolling, and it is this group I’m addressing here; the other group will never try.

Which companies understand a developer’s value, and where can I find them?
This is a very difficult question. I don’t have a list of those companies or departments, but I have seen signs of companies bending over backwards to compensate, in more ways than monetary, a developer who is a good resource to them. Allowing a person to move out of state and still work for the company from home is a sign that the company handles individual cases. Allowing them to go to a conference that will not benefit the company is another big sign, as are simple signs like flexible hours and letting some people work from home. To see those signs you need to have been working in the company for a while; look at the departments where the manager takes care of employees as individual cases. Look for the department where people get quiet extra perks, where some people work from home or remotely. In my experience, though it is not always true, medium to big companies are prompt to recognize good developers. Then again, some companies just don’t get it, and that is when you see many technical people managing developers. For all the people who email me stating that developers can also be very good managers: I do not disagree. I just think that a good developer loves writing code, and when you remove that part, a better salary isn’t enough to keep a developer happy. Burned-out developers appreciate being promoted to managers.

How do I know I work in a bad company?
In my experience as a consultant I have seen many companies. A few signs of companies that will not recognize good developers:
- Turnover is pretty high and developers are leaving at a high rate; no rocket scientist needs to tap you on the shoulder.
- The company is always looking to outsource its development resources.
- The product is not that interesting, and the company does not care much about the final result and support.
- Code sweatshops. You’ll know when you start working in one of those. Run for the hills!

Where do I start?
Disclaimer: I have based this post only on Michael “Doc” Norton’s work; this is just my interpretation and my ideas. The first thing any developer should do is look at Michael “Doc” Norton’s presentation Take Control of Your Development Career at http://docondev.blogspot.com/ and follow it like a pattern. I would personally recommend finding some language or pattern you are interested in and learning it; learn something that will make you happy. Second, join a user group and get involved in the community. There are hundreds of user groups, and I’m sure you’ll find one in your city or near your town. Code Camps and developer meet-ups are also good resources.
Third, join an open source project you are interested in, or better yet, create a new open source project with the new technology that you have learned, and get coding. Fourth, create a Twitter account and follow the people who talk about the technology you are interested in. If you follow these four steps I think you’ll be on your way; once they are complete and you release your open source project, you can say that you have accomplished the first steps. Now, do not expect anything in your career to change. You are the one changing, and you should not expect anything in return, besides borrowing some time from sleep and your family. Creating a good schedule may help you; I find wasted time in many places that I can use. Flying for work is actually one of those: an airplane lets me do my best work, with no need to borrow time from anywhere else. Making sure you always have a light, charged laptop is very important.

Next steps, following the Michael “Doc” Norton pattern (or my interpretation of it): First, help run a user group, or better yet, start a new user group. I’ll add: go to one conference a year and to the free development events around your city - Code Camps, Geek Dinners, etc. There are many free events sponsored by different companies for developers to get to know their products; I highly recommend those as the way to get connected. Second, choose a mentor. This is a very hard thing to do, I have found: find an expert in the technology you are learning who has time for you. It is difficult; I wish you the best of luck. Third, learn another technology or pattern and open your horizons a little more. Why not - if you had fun previously, keep doing it. Fourth, get involved in forums to answer and ask questions; getting noticed in public forums is rewarding for your ego after such a long journey.

Final steps, following the Michael “Doc” Norton pattern: Teach what you know, be humble about your knowledge, find as many opportunities as you can to teach and to get involved with the community, and bring all that to your day job. Mr. Norton talks about getting naked: expose to others both what you know and what you do not know. You are never too important for small opportunities, and don’t be afraid to take on anything big and learn from the experience. Any time you have the opportunity to talk to somebody who has reached the point where the community knows his or her name, you should learn from it. Take opportunities that won’t make you money but will make you happy. Sometimes you need to spend money and time: register talks at Code Camps and dev meet-ups (those are free), and also go to conferences, development summits and geek dinners, for example. One day, people will pay you to attend.

When will all this pay off? I don’t know; I’m still on the path. During your journey you may get little acknowledgements that tell you that you are on the correct path. In my case I think those are the little signs of my journey: I was awarded the Microsoft Most Valuable Professional for ASP.NET in 2007, 2008, 2009 and 2010, and I was selected to speak at DevConnections in Las Vegas in 2010 and in Orlando in 2011. I do believe that I have a long way to go, but what I do makes me happy and I hope I can keep doing it for years to come. Every year I can see an improvement in my code, and more frameworks and languages are under my belt. I have learned to embrace them all in my daily job, and I have been able to work on a few projects beyond my department.
I’m a learner and a believer in the Michael “Doc” Norton pattern, and I look forward to learning more about it so I can apply it better. In my short journey I now see my mistakes, and I did a few things right: I have been listening to intelligent people and have not been afraid to move along with technology changes. In my professional life, I have tried to avoid being locked into only one technology and product. I have always shared my code and never confused anybody who wanted to take over any of my projects; I didn’t think of anything I created as my own, nor cared too much when politics didn’t see my vision. I stayed flexible, ready and visible, yet humble. I kept my head just below the clouds, and avoided managers’ meetings. I credit my manager for my success, and I publicly faulted only myself for the failures. Hope this helps. Cheers, Al Follow me on Twitter  Read my previous post

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx

Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews.

What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise storage to Azure storage in an encrypted fashion. Using the Application Fabric, I would rather see sensitive data left on-premise in a hybrid fashion and connected to from the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better.

Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.

Programming Windows Azure by Sriram Krishnan:
- Learning about how to select applications suitable for distributed technology. The Application Fabric gets the least attention, probably because it was newer at the time.
- (Chapter One) Very clear. Good foundation; background and history, but not too much. I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
- (Chapter Two) Taught me some new aspects of a hypervisor - very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature).
- (Chapter 3) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service.
- (Chapter 4) Place storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separate configs for staging and testing. Easy switch-in and switch-out.
- (Chapter 5) There are two runtime APIs, one external and one internal. Realizing how powerful this paradigm really is. Some places seem light and drop off, but perhaps that’s best. The management API is not charged for, which is nice. I don’t often think about the price until it comes to an actual deployment.
- (Chapter 6) csmanage is something I want to dig into deeper. The API requires package moves to Blob storage first, so it needs a URL. A csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the ServiceDefinition.csdef file. Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well thought out. Learned how to run full CMD commands in a web window - not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view of this process, although this is where the book shows its age a little. Books can be a problem for cloud computing for this reason - things just change too quickly.
- (Chapter 7) Windows Azure storage is not eventually consistent - it is instantly consistent with multi-phase commit. The plumbing for this is internal; you are not required to code for it. The REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about the CloudDrive code - a PowerShell sample that mounts Blob storage as a local provider. Works against the dev fabric by default, can be switched to an account. Good treatment in the storage chapters of the differences between using dev storage and Azure storage; these can be mitigated.
- (Chapter 8) “Blobs can be of any size or number”? No - not a good statement. Blob storage is probably Azure’s closest play to Infrastructure as a Service (IaaS). Blob change operations must be authenticated, even when public. The chapters on storage are pretty in-depth.
- (Chapter 9) Queue messages are base-64 encoded. The visibility timeout ensures processing of a message in a disconnected system. Order is not guaranteed for a message, so if you need ordering, set an increasing number in the queue mechanism. While queues are accessible via REST, they are not public and are secured by default. Interesting - the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system.
- (Chapter 10) Each entity (row) in the Azure Table service is atomic - all or nothing. An entity can have up to 255 properties. Use “ID” for the class to indicate the key value, or use the [DataServiceKey] attribute. LINQ makes working with the Azure Table service much easier, although interop is certainly possible. Good description of the process of selecting the partition and row keys.
- When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page. On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic to the code. Interesting approach to deleting an index entity without having to read it first: create a local entity with the same keys and apply it to the Azure system regardless of change-state.
- (Chapter 11) Although the “Indexes” description is a little vague, it’s interesting to see a folding and stemming discussion à la the Porter stemming algorithm. The chapter presents a better discussion of indexes (at least inverted indexes) later on. Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although it has conditions, makes a transactional system more feasible. Concurrency also becomes an issue, but it is handled well if you’re using Data Services in .NET: it watches the ETag and allows you to take action appropriately.
- (Chapter 12) I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples here go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. The chapter seems out of place, and should be combined with the Blob chapter.
- (Chapter 13) The chapter on SQL Azure is dated, although the base concepts are OK. Nice example of simple ADO.NET access to a SQL Azure (or really any SQL Server) database.

    Read the article

  • VLC 2.0.3 on Lubuntu 12.04: No audio?

    - by drezabek
    I am on Lubuntu 12.04, and I have installed VLC media player version 2.0.3. When I try to play an audio file, it appears to load fine, the media position bar displays the progress, and it says it is playing, but I can't hear anything through my speakers. I can hear game audio, web audio, and audio from SMPlayer just fine, but with VLC, I can't hear anything. Below is the "Messages" output with the verbosity option set to "2 (debug)": main debug: processing request item: The Bottom, node: Playlist, skip: 0 main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: starting playback of the new playlist item main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: creating new input thread main debug: Creating an input for 'The Bottom' main debug: TIMER input launching for 'Floex - Machinarium Soundtrack - 01 The Bottom.flac' : 23.706 ms - Total 23.706 ms / 1 intvls (Avg 23.706 ms) main debug: using timeshift granularity of 50 MiB, in path '/tmp' main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' gives access `file' demux `' path `/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access_demux module: 3 candidates main debug: no access_demux module matching "file" could be loaded main debug: TIMER module_need() : 2.332 ms - Total 2.332 ms / 1 intvls (Avg 2.332 ms) main debug: creating access 'file' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac', path='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access module: 2 candidates filesystem debug: opening file `/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: using access module "filesystem" main debug: TIMER module_need() : 0.762 ms - Total 0.762 ms / 1 intvls (Avg 0.762 ms) main debug: Using stream method for AStream* main debug: starting pre-buffering main debug: received first data after 0 ms main debug: pre-buffering done 1024 bytes in 0s - 43478 KiB/s main debug: looking for stream_filter module: 7 candidates main debug: no stream_filter module matching "any" could be loaded main debug: TIMER module_need() : 0.236 ms - Total 0.236 ms / 1 intvls (Avg 0.236 ms) main debug: looking for stream_filter module: 1 candidate main debug: using stream_filter module "stream_filter_record" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for demux module: 54 candidates flacsys debug: Picture type=3 mime=image/png description='' file length=679371 qt4 debug: IM: Setting an input main debug: looking
for packetizer module: 21 candidates main debug: using packetizer module "packetizer_flac" main debug: TIMER module_need() : 0.211 ms - Total 0.211 ms / 1 intvls (Avg 0.211 ms) main debug: using demux module "flacsys" main debug: TIMER module_need() : 4.023 ms - Total 4.023 ms / 1 intvls (Avg 4.023 ms) main debug: looking for a subtitle file in /home/doug/Music/unsorted/Floex - Machinarium Soundtrack/ main debug: looking for meta reader module: 2 candidates main debug: using meta reader module "taglib" main debug: TIMER module_need() : 5.245 ms - Total 5.245 ms / 1 intvls (Avg 5.245 ms) main debug: removing module "taglib" main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' successfully opened main debug: selecting program id=0 main debug: looking for decoder module: 30 candidates main debug: using decoder module "flac" main debug: TIMER module_need() : 0.442 ms - Total 0.442 ms / 1 intvls (Avg 0.442 ms) main debug: Buffering 0% flac debug: decode STREAMINFO flac debug: channels:2 samplerate:44100 bitspersamples:16 flac debug: STREAMINFO decoded main debug: Buffering 30% main debug: recycling audio output main debug: looking for audio output module: 3 candidates main debug: Buffering 61% pulse debug: using stereo channel map pulse debug: using library version 1.1.0 pulse debug: (compiled with version 1.1.0, protocol 26) main debug: Buffering 92% main debug: Stream buffering done (371 ms in 2 ms) pulse debug: connected locally to unix:/home/doug/.pulse/dce22254e867f905188a2ce200000003-runtime/native as client #14 pulse debug: using protocol 26, server protocol 26 pulse debug: using buffer metrics: maxlength=4194304, tlength=9880, prebuf=0, minreq=3528 pulse debug: connected to sink 0: alsa_output.pci-0000_00_14.2.analog-stereo main debug: using audio output module "pulse" main debug: TIMER module_need() : 4.571 ms - Total 4.571 ms / 1 intvls (Avg 4.571 ms) main debug: output 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: mixer 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes main debug: filter(s) 'f32l'->'s16l' 44100 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: f32l->s16l, bits per sample: 32->16 main debug: using audio filter module "audio_format" main debug: TIMER module_need() : 0.187 ms - Total 0.187 ms / 1 intvls (Avg 0.187 ms) main debug: conversion pipeline completed main debug: looking for audio mixer module: 2 candidates main debug: using audio mixer module "float32_mixer" main debug: TIMER module_need() : 0.125 ms - Total 0.125 ms / 1 intvls (Avg 0.125 ms) main debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: looking for audio filter module: 1 candidate scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32 scaletempo debug: params: 30 stride, 0.200 overlap, 14 search scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode main debug: using audio filter module "scaletempo" main debug: TIMER module_need() : 0.233 ms - Total 0.233 ms / 1 intvls (Avg 0.233 ms) main debug: filter(s) 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo pulse debug: listing sink alsa_output.pci-0000_00_14.2.analog-stereo (0): Built-in Audio Analog Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: s16l->f32l, bits per sample: 16->32 main debug: using audio filter module "audio_format" main debug: TIMER 
module_need() : 0.147 ms - Total 0.147 ms / 1 intvls (Avg 0.147 ms) main debug: conversion pipeline completed pulse debug: base volume: 65536 main debug: looking for audio filter module: 1 candidate equalizer debug: equalizer loaded for 44100 Hz with 10 bands 2 pass equalizer debug: 60 Hz -> factor:0.000000 alpha:0.003013 beta:0.993973 gamma:1.993901 equalizer debug: 170 Hz -> factor:0.000000 alpha:0.008490 beta:0.983019 gamma:1.982437 equalizer debug: 310 Hz -> factor:0.000000 alpha:0.015374 beta:0.969252 gamma:1.967331 equalizer debug: 600 Hz -> factor:0.000000 alpha:0.029328 beta:0.941343 gamma:1.934254 equalizer debug: 1000 Hz -> factor:0.000000 alpha:0.047918 beta:0.904163 gamma:1.884869 equalizer debug: 3000 Hz -> factor:0.000000 alpha:0.130408 beta:0.739184 gamma:1.582718 equalizer debug: 6000 Hz -> factor:0.000000 alpha:0.226555 beta:0.546889 gamma:1.015267 equalizer debug: 12000 Hz -> factor:0.000000 alpha:0.344937 beta:0.310127 gamma:-0.181410 equalizer debug: 14000 Hz -> factor:0.000000 alpha:0.366438 beta:0.267123 gamma:-0.521151 equalizer debug: 16000 Hz -> factor:0.000000 alpha:0.379009 beta:0.241981 gamma:-0.808451 main debug: using audio filter module "equalizer" main debug: TIMER module_need() : 0.353 ms - Total 0.353 ms / 1 intvls (Avg 0.353 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: looking for visualization2 module: 1 candidate main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 3.278 ms - Total 3.278 ms / 1 intvls (Avg 3.278 ms) main debug: looking for video filter2 module: 18 candidates swscale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality) main debug: using video filter2 module "swscale" main debug: TIMER module_need() : 1.037 ms - Total 1.037 ms / 1 intvls (Avg 1.037 ms) main debug: looking for video filter2 module: 18 candidates yuvp debug: YUVP to YUVA converter main debug: using video filter2 module "yuvp" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: Deinterlacing available main debug: deinterlace 0, mode blend, is_needed 0 main debug: Opening vout display wrapper main debug: looking for vout display module: 6 candidates main debug: looking for vout window xid module: 4 candidates qt4 debug: requesting video... 
qt4 debug: Video was requested 0, 0 main debug: using vout window xid module "qt4" main debug: TIMER module_need() : 61.671 ms - Total 61.671 ms / 1 intvls (Avg 61.671 ms) main debug: looking for inhibit module: 2 candidates main debug: using inhibit module "xdg_screensaver" main debug: TIMER module_need() : 0.336 ms - Total 0.336 ms / 1 intvls (Avg 0.336 ms) xdg_screensaver debug: started xdg-screensaver (PID = 6682) xcb_xv debug: connected to X11.0 server xcb_xv debug: vendor : The X.Org Foundation xcb_xv debug: version: 11103000 xcb_xv debug: using screen 0x15a xcb_xv debug: using XVideo extension v2.2 xcb_xv debug: using adaptor NV17 Video Texture xcb_xv debug: using port 310 xcb_xv debug: using image format 0x30323449 xcb_xv debug: using X11 visual ID 0x21 (depth: 24) xcb_xv debug: using X11 window 0x03400000 xcb_xv debug: using X11 graphic context 0x03400002 main debug: VoutDisplayEvent 'fullscreen' 0 main debug: VoutDisplayEvent 'resize' 800x500 window main debug: using vout display module "xcb_xv" main debug: TIMER module_need() : 69.890 ms - Total 69.890 ms / 1 intvls (Avg 69.890 ms) main debug: original format sz 800x500, of (0,0), vsz 800x500, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0 main debug: removing module "freetype" main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 4.552 ms - Total 4.552 ms / 1 intvls (Avg 4.552 ms) main debug: using visualization2 module "visual" main debug: TIMER module_need() : 84.104 ms - Total 84.104 ms / 1 intvls (Avg 84.104 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 48510 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates main debug: using audio filter module "samplerate" main debug: TIMER module_need() : 0.375 ms - Total 0.375 ms / 1 intvls (Avg 0.375 ms) main debug: conversion pipeline completed main debug: End of audio preroll main debug: Decoder buffering done in 91 ms main warning: PTS is out of range (-9269), dropping buffer pulse debug: deferring start (190703 us) main debug: looking for video blending module: 1 candidate main debug: using video blending module "blend" main debug: TIMER module_need() : 0.275 ms - Total 0.275 ms / 1 intvls (Avg 0.275 ms) main debug: Detected interlaced video main debug: deinterlace 0, mode blend, is_needed 1 xcb_xv debug: display is visible pulse debug: starting deferred pulse warning: too late by 93760 us pulse debug: changed sample rate to 44186 Hz pulse debug: started pulse warning: too late by 94474 us pulse debug: changed sample rate to 44229 Hz pulse warning: too late by 93532 us pulse debug: changed sample rate to 44272 Hz pulse warning: too late by 92829 us pulse debug: changed sample rate to 44315 Hz pulse warning: too late by 92132 us pulse debug: changed sample rate to 44358 Hz xcb_xv debug: display is visible pulse warning: too late by 91534 us pulse debug: changed sample rate to 44401 Hz xcb_xv debug: display is visible pulse warning: too late by 89482 us pulse debug: changed sample rate to 44440 Hz xcb_xv debug: display is visible xcb_xv 
debug: display is visible pulse warning: too late by 87529 us pulse debug: changed sample rate to 44479 Hz pulse warning: too late by 84577 us pulse debug: changed sample rate to 44504 Hz main debug: auto hiding mouse cursor pulse warning: too late by 78562 us pulse debug: changed sample rate to 44492 Hz pulse warning: too late by 68015 us pulse debug: changed sample rate to 44422 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor pulse debug: changed sample rate to 44336 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor I have had issues with VLC in the past: the audio quality was extremely crackly, as if the headphone jack was plugged in only halfway, and the sounds were extremely sharp and caused my speakers to make a ringing/vibrating noise... It would eventually start working after I messed around with the audio settings, but it happened after every restart. I eventually switched to SMPlayer, but now I need some of the features that VLC offers, and I still can't use VLC. At this point, the audio cannot be heard at all, and the method I used before, messing around with the audio settings, isn't getting me anywhere. (Note: I reposted this on VideoLAN's forums, here: http://forum.videolan.org/viewtopic.php?f=13&t=104726) Please let me know if you need more information, or are confused by something I posted! Thanks!

    Read the article

  • Why version numbers matter least for Chrome, more for Firefox, and most for IE

    - by anirudha
    Everything not perfect. in software the software make and growth by user feedback like what user expected from the software and want in next version of software. In a chrome Event i hear about the Chromium. you can find some interesting things here Video 1 Video 2 come to the point. when i hear about some good website of india. many of them talking a little thing in common that. We are #1 because we not thing that we make a great application and deploy them and think that we finished own works preharps in a small days we make a small website deploy them and improve them always latter. what the point they all talking about:- the conclusion is that software make by user feedback. they tell that he not spent much time and wait for a long time when their project was finish and they launch their website. preharps they tell that they make a small website in a small time and launched them. make a research on them later and make them better later and website growth as they thing. if they are late then someone else can win even their project was much good them other. not more but a little story:-  before few month i hear about a great website who sold many of books daily i myself purchase some from them to track how they work and how they provided service. i not found any problem with their service. the service they provided is good but when i see their website i found that the mockup code was very badly designed. i am not know the matter how they growth because they used very other stuff who make their website slow. when i research something more i found that their is very hard to implement the website look like them. on their blog they writing about a mail they have. the clone of them make by many other but not goes good as well as they make. after few month later website is looking great. many thing they improved and make them better as  other thing. a another conclusion that same as another story that user feedback. well now come to the point. we talking about Chrome,firefox and IE. what thing is goes common that they all are browser. but something goes different that Chrome is a one of the best browser. from a month many of issue submitted to chrome that user found when they use them. so what is make this different the different is that when feedback goes to someone they take a action and think to make them better so improvement of chrome based on feedback user put using many things. secondly because it's goes open-source many of developer contribute them and make them real browser not real [tape] browser as like IE [a good example]. as you see in video they talking about silent update in chrome and futurecoming chromium. the thing they implement is too good. because by this thing user not worry about a new version. i myself never find a problem that you need to user new version as we found same problem in other application. Well think are great in chrome and now talking about Firefox. Firefox is a best option for development as well as chrome best for surfing the internet. in firefox many thing are great like plugin [ex: Firebug] , addons personas themes and many other thing and customization in firefox make them really a browser not like a joker [IE a good example]. well now come to IE. are IE really great no. someone from Microsoft can say that ha ha hi hi because they can't see the power of open-source. they thing that they make a software and they never need user feedback because they produced windows who really great for user because they used them. 
example :- before few month Microsoft shipped Windows live. when i use them that i found that their is no sense make for using this one software. suppose you need to write a post through Live writer. the old version are great i myself have no problem but in 2011 i found that they changed everything in user interface. so learn a new thing and spent sometime more to learn a new version whenever need are same and feature are same so why user spent a little time more to learn a lesson who they want to teach even their is no sense to learn them. the problem in 2011 Live not only of mine their are many other have same problem as mine and forget live 2011 after the see a badly design user interface. even they tell we maked in WPF yeah yeah WPF we make in .net. are you can say that what is the matter .net for user. the user have no problem to use WPF based application even you make them fool as we make them in WPF 2020 they are future technologies and we launch it 10 year before only for you yeah you dear customer of mine. yeah they thing WPF is best and thing to implement every software they make even they forget to make better user interface but they also remember to make them next version in WPF. the IE 9 Rc release on 10 febuary. but are they really cool. how much feedback they take and take action of them. their is no answer because they thing to launch a software they never thing what user want and off-course not care of user feedback. as we mention in Firefox and in chrome user feedback have a big matter because sound come from a public and user who use the software not only who make them software as IE 9 have. so feedback take a opportunities to make their software better and less hassel to use them in user hands not only in developer hands. so IE9 is not a good guys who still need of user if they really want a experience. well what Microsoft implemented in IE. i am not talking about that furthure more but i found in article last days[why not say reading a google blog]  yeah see them in http://googleblog.blogspot.com/2011/02/microsofts-bing-uses-google-search.html Well their is nothing good for developer in IE9. the blah blah blah they can always said on MSDN and many other site they have. many from public talking about them because they never can see a good software outside Microsoft. they never talking about Firebug even in books they never show you that. well i know competitor never show you a stuff of competitor i have same issue from Yahoo. on a days i hear from newsletter from them they write a subline on the bottom that USE IE or Firefox to exerience better Web. i am agree with Firefox and i am not know they really talking about IE or joking but i never believe they forget to put chrome. well i know their is corporate rule everyone should follow first. so no problem yahoo i know the matter. well IE:- so what is IE and Why We should use IE. well their is no sense to use IE. the thing we expect from IE but never found that:- first thing is that as a developer we thing the customization as well as other browser have like in chrome have it's own customization and firefox is also great in this matter. but IE really for Web development. are you joking:- the thing they mention in their blog is that IE9 have a new developer tool who have three new panel or tabs. are this joke whenever Firefox and chrome have everyday a new plugin or great upgrade of old plugin they tell we add three new panel first is network second is blah third is blah. 
    Well, nice joke, all you MSDN bloggers; I like the way you talk about IE, even though we know how much the browser matters. I suspect that when they made IE6 they talked about IE the same way they talk today. Secondly, there are no other tools to use with IE besides the IE developer tool. Firebug is available for IE, but it was not made by the IE team; the Firebug team made it for IE themselves, because many developers wanted to use Firebug but could not, still going mad about IE because day and night they hear only about tools made by Microsoft. So: no plugins (not even very small developer tools), no customization, no Personas or themes, and no updates. And why forget that topic - come with us and share a little more. They launched IE6, then 7, then 8, and now 9 (with more to come), but what do they do? They do nothing with user feedback. They still think WPF is great because colors make users feel cool, and they forget to implement the things others already provide. Chrome and Firefox came after IE - Mozilla Firefox in 2004, and Chrome as late as 2008 - yet even though they were late, they focused on the developer first and on the things developers feel first: customization, developer tools, themes and Personas, and many other great things. Can you find those in IE, even in the next one - I mean IE10? Never, because they think only about shipping software, or about forcing users onto a new version of the OS. I am only confused about why they do not just wait and force users to purchase Windows 8 instead of 7. So IE has no customization, not even a small developer tool; I wish they would make a customizable interface like the one in Firefox, configured through about:config. So the thing we discussed is not really the point we set out to reach, but it is now clear why version numbers do not matter in Firefox and Chrome: Chrome and Firefox do not wait a long time and then explode a bomb of publicity. They keep working and deliver upgrades to users as soon as possible (Chrome never even tells you it has gone out of date; it updates itself). So updates come soon in Firefox and Chrome, but with IE there is a long wait, and what they make is made without feedback. So IE is really not for humans, and not really for us. Whenever you find a bug in Chrome or Firefox, you report it, find it marked as work in progress, and see the fix in the next version. But what do you see with IE? You find that the "bug" you found in IE is a feature they never implemented at all. Well, IE9 is the next IE6 for developers. Conclusion: after reading this whole post you will find that I hate everything about IE. Why did I write a big post on a small, pitiful piece of software like IE? Why did I open the poll on IE? Is there anything in IE that broke my heart? Is something wrong with me and with my IE9? Did I gain anything from IE9? Why did I write a big post? Well, as a developer, play a trick: give some time to making Chrome better, and some more to making Firefox better, and you will feel that contributing really matters; as a contributor you will find others, and their thoughts, on the same software. Some are great; some, maybe, are blah blah. But is it true that no good solution can be made outside Microsoft, simply because it is outside Microsoft? That is not true. What a developer makes matters, whether it is built with Microsoft technologies or with technologies from outside MS. So stop this; I do not want to talk about other things, just stop it. I mean, there is no more blah I want to exchange with you about IE. I still hate it, and I believe it is the next IE6 for the Web.
    Answer: if you still need the answer in a couple of lines, it is this: IE delays updates as long as it can, and forces users onto IE9 because Microsoft wants to promote Windows first and think about IE afterwards; Chrome and Firefox do not behave the way IE does. So IE is late and forced on its users, while in Firefox and Chrome upgrades come as soon as they possibly can. Thanks for giving me a great time and reading my blah on - blah, I mean IE9. Thanks again, Anirudha

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction

    There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems":

    "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities."

    In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the type of WebLogic Server issues which fall into the scope of Diagnostic Framework.

    How to Configure Diagnostic Framework?

    Depending on your Fusion Middleware product choice, you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as: Portal / Forms / Reports / Discoverer; Identity Management (OID, OAM, OIM etc); WebCenter; SOA. Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article "How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers" [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below:

    - Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g
    - WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case]

    How to Get Started With Diagnostic Framework

    To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems". A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below.

    How to Upload Diagnostic Framework Incident Data to Oracle Support

    Some background information: there are three interfaces to the repository: Enterprise Manager Cloud Control (Support Workbench), WLST (command line), and ADRCI (command line). Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package Diagnostic Framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control.
    Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package and has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need to use one command line interface, but currently I suggest using both - mainly because ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code.

    Instructions

    Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message.

    1) Find the incident

    Note: the managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server, the example WLST commands will not work.

    a) Launch WLST. Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin), otherwise you will get a syntax error like the one below:

    Traceback (innermost last):
      File "<console>", line 1, in ?
    NameError: listIncidents

    MW_HOME/oracle_common/common/bin/wlst.sh

    b) Connect to the managed server or the admin server, e.g.

    wls:/offline> connect('weblogic','welcome1','t3://localhost:7020')

    c) Run the command:

    wls:/MyDomain/serverConfig> listIncidents()

    This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command:

    wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer')

    Example output:

    Incident Id     Problem Key              Incident Time
    1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012

    The problem key (shown in bold in the original post) is the description you do not see when using the ADRCI SHOW INCIDENT command. Make a note of the incident id. You are ready to move to step 2.

    2) Package the incident

    a) Set up the environment - the example commands below are for Unix:

    cd <DOMAIN_HOME>/bin
    . ./setDomainEnv.sh

    If you want ADRCI to run a Remote Diagnostic Agent (RDA) collection (recommended) at generate package time, point ORACLE_HOME at oracle_common:

    ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME

    To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at the WL_HOME/server/adr directory:

    ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME

    b) Launch adrci:

    $WL_HOME/server/adr/adrci

    c) Set BASE and HOMEPATH:

    adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr
    adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver

    d) Optionally run SHOW INCIDENTS, e.g.
    adrci> SHOW INCIDENTS -MODE DETAIL

    ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
    *************************************************************************
    INCIDENT INFO RECORD 1
    *************************************************************************
       INCIDENT_ID                   1
       STATUS                        ready
       CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00
       PROBLEM_ID                    1
       CLOSE_TIME                    <NULL>
       FLOOD_CONTROLLED              none
       ERROR_FACILITY                DFW
       ERROR_NUMBER                  99998
       ERROR_ARG1                    <NULL>
       ERROR_ARG2                    <NULL>
       ERROR_ARG3                    <NULL>
       ERROR_ARG4                    <NULL>
       ERROR_ARG5                    <NULL>
       ERROR_ARG6                    <NULL>
       ERROR_ARG7                    <NULL>
       ERROR_ARG8                    <NULL>
       ERROR_ARG9                    <NULL>
       ERROR_ARG10                   <NULL>
       ERROR_ARG11                   <NULL>
       ERROR_ARG12                   <NULL>
       SIGNALLING_COMPONENT          <NULL>
       SIGNALLING_SUBCOMPONENT       <NULL>
       SUSPECT_COMPONENT             <NULL>
       SUSPECT_SUBCOMPONENT          <NULL>
       ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
       IMPACTS                       0
    1 rows fetched

    e) Create a logical package: IPS CREATE PACKAGE INCIDENT incident_number, e.g.

    adrci> IPS CREATE PACKAGE INCIDENT 1
    Created package 1 based on incident id 1, correlation level typical

    f) Generate the package: IPS GENERATE PACKAGE package_number IN path, e.g.

    adrci> IPS GENERATE PACKAGE 1 IN /tmp
    Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete

    Note: if the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr.

    3) Upload the package zip to Oracle Support via your Service Request

    a) Log into My Oracle Support and locate your Service Request.
    b) Click on "Add Attachments".
    c) Upload the zip file.
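    For repeat use, the find-and-package cycle above can be strung together into a small Unix shell script. The sketch below is a minimal illustration and is not part of the original article: the middleware home, domain and server names, credentials, listen address and incident id are all placeholders - substitute values from your own environment before running it.

    #!/bin/sh
    # Minimal sketch: list Diagnostic Framework incidents via WLST, then
    # package one with ADRCI. All values below are placeholder assumptions.
    MW_HOME=/oracle/middleware
    DOMAIN_HOME=$MW_HOME/user_projects/domains/mydomain
    SERVER_NAME=mymanagedserver
    INCIDENT_ID=1

    # Step 1 - list incidents. Use the oracle_common copy of wlst.sh,
    # otherwise listIncidents() is undefined.
    $MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
    connect('weblogic','welcome1','t3://localhost:7020')
    listIncidents()
    disconnect()
    exit()
    EOF

    # Step 2 - package the incident. ORACLE_HOME pointed at oracle_common
    # makes ADRCI attempt an RDA collection at generate time; point it at
    # $WL_HOME/server/adr instead to skip RDA. setDomainEnv.sh sets WL_HOME.
    cd $DOMAIN_HOME/bin
    . ./setDomainEnv.sh
    ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME

    # ADRCI commands are separated by semicolons inside the exec string.
    # IPS GENERATE PACKAGE 1 assumes this is the first package created in
    # this repository - check the output of IPS CREATE PACKAGE otherwise.
    $WL_HOME/server/adr/adrci exec="
      SET BASE $DOMAIN_HOME/servers/$SERVER_NAME/adr;
      SET HOMEPATH diag/ofm/mydomain/$SERVER_NAME;
      IPS CREATE PACKAGE INCIDENT $INCIDENT_ID;
      IPS GENERATE PACKAGE 1 IN /tmp"

    Driving adrci through its exec parameter keeps step 2 non-interactive; if the generate step hangs, re-run with ORACLE_HOME pointed at $WL_HOME/server/adr, as per the note above, to skip the RDA collection.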

    Read the article

  • CodePlex Daily Summary for Thursday, June 21, 2012

    CodePlex Daily Summary for Thursday, June 21, 2012

    Popular Releases

    uComponents: uComponents v3.1.1: Continuing on from 84817, we are proud to announce our 3.1.1 release! The following issues have been resolved: 14640, 14696, 14704, 14724. Please note: this release is not to be confused with the upcoming 80410 (which will support .NET 4.0).

    MVVM Light Toolkit: V4RTM (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RTM. It includes the DLLs for Windows 8 Release Preview. An updated NuGet package is also available at http://nuget.org/packages/MvvmLightLibs. An installer with binaries, snippets and templates will follow ASAP.

    Weapsy - ASP.NET MVC CMS: 1.0.0: Some changes to layout and CSS; changed version number to 1.0.0.0; solved Cache and Session items handler error in IIS 7; created the Modules, Plugins and Widgets areas; replaced CKEditor with TinyMCE; created the System Info page; minor changes.

    Auto Proxy Configuration: Windows Proxy Setup V1.2: Bug fixes.

    XDA ROM Hub: XDA ROM HUB v0.5: Added XRH Backup - backup & restore data, system and cache. USE AT YOUR OWN RISK! USE ONLY IN RECOVERY!

    AcDown????? - AcDown Downloader Framework: AcDown????? v3.11.7: [release notes in Chinese; characters lost in transcription]

    Apex: Apex 1.4: Apex 1.4 provides a framework for rapid MVVM development. Download Apex 1.4 to get the core binaries, Visual Studio extensions, project templates, samples and documentation. The 1.4 release provides a vast number of enhancements via the Apex Broker. The Apex Broker is an object that can be used to retrieve models, get the view for a view model and more, much like an IoC container. The new Zune-style application templates for WPF and Silverlight give a great starting point for makin...

    NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support the new VS2012 as well as VS2010. This release also fixes the issue "The 'Comment Selection' includes the first line after the selection". If the new NShader version doesn't highlight your shader, you can try to: remove the registry entry HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache; remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...

    JSON Toolkit: JSON Toolkit 4.0: Up to 2.5x performance improvement in stringify operations; up to 1.7x performance improvement in parse operations; improved error messages when parsing invalid JSON strings; extended support to .NET 2.0, .NET 3.5, .NET 4.0, Silverlight 4, Windows Phone, Windows 8 Metro apps and Xbox; JSON namespace changed to the ComputerBeacon.Json namespace.

    Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System requirements - OS: Windows 7, Windows Vista, Windows Server 2008, Windows Server 2008 R2. Web server: Internet Information Services 7.0 or above. .NET Framework: .NET Framework 4.0 with the WCF Activation feature (HTTP Activation, Non-HTTP Activation for net.pipe/net.tcp WCF bindings). ASP.NET MVC: ASP.NET MVC 3.0. Database: Microsoft SQL Server 2005, 2008 or 2008 R2. Additional deployment configuration: started Windows Process Activation service, Start...

    ASP.NET REST Services Framework: Release 1.3 - Standard version: The REST-services Framework v1.3 has important functional changes allowing the use of complex data types as service call parameters. These can be mapped to form or query string variables or the HTTP message body, which is especially useful when REST-style service URLs are used with the POST or PUT HTTP method. Beginning from v1.1 the REST-services Framework is compatible with the ASP.NET Routing model as well as with the CRUD (Create, Read, Update, and Delete) principle. These two are often important when buildin...

    NanoMVVM: a lightweight wpf MVVM framework: v0.10 stable beta: v0.10: minor fixes to UI and code, added error example to async commands, separated project into various releases (mainly into logical wholes), removed Expression Blend satellite assemblies.

    MFCMAPI: June 2012 Release: Build: 15.0.0.1034. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system.

    MonoGame - Write Once, Play Everywhere: MonoGame 2.5.1: The MonoGame team are pleased to announce that MonoGame v2.5.1 has been released. This release contains important bug fixes and minor updates. Recent additions include project templates for iOS and MacOS. The MonoDevelop.MonoGame AddIn also works on Linux. We have removed the dependency on the third-party GamePad library to allow MonoGame to be included in the Debian/Ubuntu repositories. There has been a major bug fix to ensure textures are disposed of correctly, as well as some ...

    ????: ????2.0.2: [release notes in Chinese; characters lost in transcription]

    Azure Storage Explorer: Azure Storage Explorer 5 Preview 1 (6.17.2012): Azure Storage Explorer version 5 is in development, and Preview 1 provides an early look at the new user interface and some of the new features. Here's what's new in v5 Preview 1: new UI, similar to the new Windows Azure HTML5 portal; support for configuring and viewing storage account logging; support for configuring and viewing storage account monitoring; uses the Windows Azure 1.7 SDK libraries; bug fixes.

    Codename 'Chrometro': Developer Preview: Welcome to the Codename 'Chrometro' Developer Preview! This is the very first public preview of the app. Please note that this is a highly primitive build and the app is not even half of what it is meant to be. The Developer Preview sports the following: 1) an easy to use application setup; 2) the Assistant, which simplifies your task of customization; 3) the partially complete Metro UI; 4) a variety of settings; 5) a partially complete web browsing experience. To get started, download the Ins...

    KangaModeling: Kanga Modeling 1.0: This is the public release 1.0 of Kanga Modeling.

    Cosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites: Visual Studio 2010 - any version including Express (Express users must also install the Visual Studio 2010 Integrated Shell runtime); VMWare - Cosmos can run on real hardware as well as other virtualization environments, but our default debug setup is configured for VMWare (VMWare Player (free) or Workstation); VMWare VIX API 1.11.

    AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: New feature added that allows the user to select a remind-later interval.

    New Projects

    .NET Heatmap: This is a simple project using C#, jQuery, and heatmap.js that allows you to create a heatmap for a web page using static data from a SQL database.

    Advanced Data Server: Advanced Data Server (ADS) is a library that enables you to create powerful server applications with little code.

    ARPAMISproject: This is an ambitious project that aims to provide an extensive business solution to schools, universities and similar academic institutions.

    ArraySegments (by Stephen Cleary): Lightweight extension methods for ArraySegment<T>, particularly useful for byte arrays.

    Auto Proxy Configuration: This tool sets the proxy server automatically according to the DNS domain.

    Bongiozzo Photosite: Simple photo site written using ASP.NET MVC 4.0 over the Flickr API and the Galleria image gallery framework.

    Cloud Media SharePoint Extension: With this extension you can easily add media from the cloud, like YouTube or Vimeo. Metadata from Vimeo or YouTube will be added too.

    Content Organizer Rule Manager SharePoint 2010: Create and manage content organizer rules faster for SharePoint 2010.

    Crm Customization Manager: Crm Customization Manager (CCM) by N.JL helps Dynamics CRM system administrators to easily import customisations and translations, with scheduling possibilities.

    demoHello: asp.net

    Ghost Puzzle: Protect your files with Ghost Puzzle.

    IonoWumpus: A simple Hunt the Wumpus implementation.

    LargeSky Personal Project: This is a personal project, just for code storage.

    Maze Game: Maze Game is a game for children / computer beginners to practice mouse movement with fun.

    NASM Develop IDE (The Open Source NASM Development Environment For Windows): A simple, lightweight, all-in-one application that can help developers develop NASM applications in Windows without the need to remember fancy commands.

    Navigational: Gator brings navigation to projects built around life situations.

    Real Folders for Visual Studio: Real Folders for Visual Studio is a free plugin which makes Solution Folders map to real file system folders. With Real Folders you have the opportunity to organize your files in a simpler way than standard Visual Studio Solution Folders behave (completely uncommitted to any folder on your file system).

    SaltFx: SaltFx is an N-Layered Domain Driven Design (DDD) framework for .NET development.

    SharePoint Location-Based Weather Webpart: Uses UPS information to display local weather for the user via the Yahoo Weather service.

    SharePoint Publishing: The project is aimed at supplying publishers with tools & design elements to provide a means of publishing through SharePoint.

    SpellLight - Lightweight Silverlight Spell Checker Library: SpellLight is a lightweight Silverlight spell check library.

    SUDOKU APP: None

    Trainer: This is an application used for logging runs, nutrition, weight, and other goals.

    weibospace: An iOS project

    Read the article

  • OK - What now? How do we become a Social Business?

    - by Michael Snow
    We hope that those of you that attended yesterday's Webcast with Brian Solis enjoyed Brian's discussion with Christian Finn for our last Webcast of the season for the Oracle Social Business Thought Leaders Series. For those of you that may have missed the webcast or were stuck at a company holiday party, you'll be glad to hear that the webcast will be available On-Demand starting later today (12/14/12). And any of you who'd like to listen to a quick but informative podcast with Brian can listen to that here. Some of you may still be left with questions about how to get from point A to point B, and even more confused than when you started thinking about this new world of Digital Darwinism. The post below, grabbed from an abundance of great thought leadership prose on Brian's blog, may help you frame the path you need to start walking sooner rather than later to stay off of the endangered species list. As you explore your path forward, please keep Oracle in mind - we do offer a wide range of solutions to help your organization optimize the engagement for your customers, employees and partners.

    The Path from a Social Brand to a Social Business
    Brian Solis
    Originally posted May 2, 2012

    I've been a long-time supporter of MediaTemple's (MT) Residence program along with Gary Vaynerchuk, Neil Patel, and many others whom I respect. I wanted to share my "7 questions to answer to become a social business" with you here.

    Social media is pervasive and is becoming the new normal in corporate marketing. Brands who get this right are starting to build their own media networks rich with customer connections numbering in the millions. Right now, Coca-Cola has over 34 million fans on Facebook, but they're hardly alone. Disney follows just behind with 29 million fans, Starbucks boasts 25 million, and Oreo, Red Bull, and Converse play host to over 20 million fans. If we were to look at other networks such as Twitter and YouTube, we would see a recurring theme. People are connecting en masse with the businesses they support, and new media represents the ability to cultivate consumer relationships in ways not possible with traditional earned or paid media.

    Sounds great, right? This might sound abrupt, but the truth is that we're hardly realizing the potential of what lies before us. Everything begins with understanding not just how other brands are marketing themselves in social media, but also seeing what they're not doing and envisioning what's possible. We're already approaching the first of many crossroads that new media will present. Do we take the path of a social brand or that of a social business? What's the difference? A social brand is just that, a business that is remodeling or retrofitting its existing marketing practices to new media.
    A social business is something altogether different, as it embraces introspection and extrospection to reevaluate internal and external processes, systems, and opportunities, and to transform into a living, breathing entity that adapts to market conditions and opportunities. It's a tough decision to make right now, especially at a time when all we read about is how much success many businesses are finding without having to answer this very question. With all of the newfound success in social networks, the truth is that we're only just beginning to learn what's possible, and that's where you come in.

    When compared to the investment in time and resources across the board, social media represents only a small part of the mix. But with your help, that's all about to change. The CMO Survey, an organization that disseminates the opinions of top marketers in order to predict the future of markets, recently published a report that gave credence to the fact that social media is taking off. One of the most profound takeaways from the report was this gem: "The 'like button' [in Facebook] packs more customer-acquisition punch than other demand-generating activities." With insights like this, it's easy to see why the race to social is becoming heated. The report also highlighted exactly where social fits in the marketing mix today and, as you can see, despite all of the hype, it's not a dominant focus yet. As of August 2011, the percentage of overall marketing budgets dedicated to social media hovered at around 7%. However, in 2012 the investment in social media will climb to 10%. And, in five years, social media is expected to represent almost 18% of the total marketing budget.

    Think about that for a moment. In 2016, social media will only represent 18%? Cue the sound of a record scratching here. With businesses finding success in social networks, why are businesses failing to realize the true opportunity brought forth by the ability to listen to, connect with, and engage with customers? While there's value in earning views, driving traffic, and building connections through the 3F's (friends, fans and followers), success isn't defined simply by what really amounts to low-hanging fruit. The truth is that businesses cannot measure what it is they don't know to value. As a result, innovation in new engagement initiatives is stifled because we're applying dated or inflexible frameworks to new paradigms. Social media isn't owned by marketing, but instead by the entire organization. This changes everything and makes your role so much more important. It's up to you to learn how to think outside of the proverbial social media box, to see what others don't: the ability to improve customer experiences through the evolution of a social brand into a social business. Doing so will translate customer insights, from what they do and don't share in social networks, into better products, services, and processes.

    See, customers want something more from their favorite businesses than creative campaigns, viral content, and everyday dialogue in social networks. Customers want to be heard and they want to know that you're listening. How businesses use social media must remind them that they're more than just an audience, consumer, or a conduit to "trigger" a desired social effect. Herein lies both the challenge and opportunity of social media. It's bigger than marketing. It's also bigger than customer service.
    It's about building relationships with customers that improve experiences and, more importantly, teach businesses how to re-imagine products and internal processes to better adapt to potential crises and seize new opportunities. When it comes down to it, Twitter, Facebook, YouTube and Foursquare are all channels for listening, learning, and engaging. It's what you do within each channel that builds a community around your brand. And, at the end of the day, the value of the community you build counts for everything. It's important to understand that we cannot assume that these networks simply exist for people to line up for our marketing messages or promotional campaigns. Nor can we assume that they're reeling in anticipation of simple dialogue. They want value. They want recognition. They want access to exclusive information and offers. They need direction, answers and resolution.

    What we're talking about here is the multidimensional makeup of consumers, and how a one-sided approach to social media forces the need for social media to expand beyond traditional marketing and socialize the various departments, lines of business, and functions to engage based on the nature of the situation or opportunity. In the same CMO study, it was revealed that marketers believe that social media has a long way to go toward integrating into the overall company strategy. On a scale of 1-7, with one being "not integrated at all" and seven being "very integrated," 22% chose "one." Critical functions such as service, HR, sales, R&D, product marketing and development, IR, CSR, etc. are either not engaged or are operating social media within a silo disconnected from other efforts or possibilities. The problem is that customers don't view a company by silo; instead, they see one company, one brand, and their experience in social media forms an impression that eventually contributes to their view of your brand.

    The first step here is to understand business priorities and objectives, to assess how social media can be additive in achieving these goals. Additionally, survey the landscape to determine other areas of interest as they specifically relate to your business:

    - Are customers seeking help or direction?
    - Who are your most valuable customers and what are they sharing?
    - How can you use social media to acquire and retain customers?
    - What ideas are circulating and how can you harness user generated activity and content to innovate or adapt to better meet the needs of customers?
    - How can you broaden a single customer view to recognize the varying needs of customers and how your organization can organize around each circumstance?
    - What insights exist based on how consumers are interacting with one another? How can this intelligence inform marketing, service, products and other important business initiatives?
    - How can your business extend its current efforts to deliver better customer experiences and, in turn, more effectively unite internal collaboration and communication?

    Customer demands far exceed the capabilities of the marketing department. While creating a social brand is a necessary endeavor, building a social business is an investment in customer relevance now and over time. Beyond relevance, a social business fosters a culture of change that unites employees and customers and sets a foundation for meaningful and beneficial relationships. Innovation, communication, and creativity are the natural byproducts of engagement and transformation. As a social brand, we are competing for the moment.
    As a social business, we are competing for the future in all that we do today.

    Read the article
