Search Results



  • MySQL Database Query Problem

    - by moustafa
    I need your help!!!. I need to query a table in my database that has record of goods sold. I want the query to detect a particular product and also calculate the quantity sold. The product are 300 now, but it would increase in the future. Below is a sample of my DB Table #---------------------------- # Table structure for litorder #---------------------------- CREATE TABLE `litorder` ( `id` int(10) NOT NULL auto_increment, `name` varchar(50) NOT NULL default '', `address` varchar(50) NOT NULL default '', `xdate` date NOT NULL default '0000-00-00', `ref` varchar(20) NOT NULL default '', `code1` varchar(50) NOT NULL default '', `code2` varchar(50) NOT NULL default '', `code3` varchar(50) NOT NULL default '', `code4` varchar(50) NOT NULL default '', `code5` varchar(50) NOT NULL default '', `code6` varchar(50) NOT NULL default '', `code7` varchar(50) NOT NULL default '', `code8` varchar(50) NOT NULL default '', `code9` varchar(50) NOT NULL default '', `code10` varchar(50) NOT NULL default '', `code11` varchar(50) character set latin1 collate latin1_bin NOT NULL default '', `code12` varchar(50) NOT NULL default '', `code13` varchar(50) NOT NULL default '', `code14` varchar(50) NOT NULL default '', `code15` varchar(50) NOT NULL default '', `product1` varchar(100) NOT NULL default '0', `product2` varchar(100) NOT NULL default '0', `product3` varchar(100) NOT NULL default '0', `product4` varchar(100) NOT NULL default '0', `product5` varchar(100) NOT NULL default '0', `product6` varchar(100) NOT NULL default '0', `product7` varchar(100) NOT NULL default '0', `product8` varchar(100) NOT NULL default '0', `product9` varchar(100) NOT NULL default '0', `product10` varchar(100) NOT NULL default '0', `product11` varchar(100) NOT NULL default '0', `product12` varchar(100) NOT NULL default '0', `product13` varchar(100) NOT NULL default '0', `product14` varchar(100) NOT NULL default '0', `product15` varchar(100) NOT NULL default '0', `price1` int(10) NOT NULL default '0', `price2` int(10) NOT NULL default '0', `price3` int(10) NOT NULL default '0', `price4` int(10) NOT NULL default '0', `price5` int(10) NOT NULL default '0', `price6` int(10) NOT NULL default '0', `price7` int(10) NOT NULL default '0', `price8` int(10) NOT NULL default '0', `price9` int(10) NOT NULL default '0', `price10` int(10) NOT NULL default '0', `price11` int(10) NOT NULL default '0', `price12` int(10) NOT NULL default '0', `price13` int(10) NOT NULL default '0', `price14` int(10) NOT NULL default '0', `price15` int(10) NOT NULL default '0', `quantity1` int(10) NOT NULL default '0', `quantity2` int(10) NOT NULL default '0', `quantity3` int(10) NOT NULL default '0', `quantity4` int(10) NOT NULL default '0', `quantity5` int(10) NOT NULL default '0', `quantity6` int(10) NOT NULL default '0', `quantity7` int(10) NOT NULL default '0', `quantity8` int(10) NOT NULL default '0', `quantity9` int(10) NOT NULL default '0', `quantity10` int(10) NOT NULL default '0', `quantity11` int(10) NOT NULL default '0', `quantity12` int(10) NOT NULL default '0', `quantity13` int(10) NOT NULL default '0', `quantity14` int(10) NOT NULL default '0', `quantity15` int(10) NOT NULL default '0', `amount1` int(10) NOT NULL default '0', `amount2` int(10) NOT NULL default '0', `amount3` int(10) NOT NULL default '0', `amount4` int(10) NOT NULL default '0', `amount5` int(10) NOT NULL default '0', `amount6` int(10) NOT NULL default '0', `amount7` int(10) NOT NULL default '0', `amount8` int(10) NOT NULL default '0', `amount9` int(10) NOT NULL default '0', `amount10` 
int(10) NOT NULL default '0', `amount11` int(10) NOT NULL default '0', `amount12` int(10) NOT NULL default '0', `amount13` int(10) NOT NULL default '0', `amount14` int(10) NOT NULL default '0', `amount15` int(10) NOT NULL default '0', `totalNaira` double(20,0) NOT NULL default '0', `totalDollar` int(20) NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 4096 kB; InnoDB free: 4096 kB; InnoDB free: 409'; #---------------------------- # Records for table litorder #---------------------------- insert into litorder values (27, 'Sanyaolu Fisayo', '14 Adegboyega Street Palmgrove Lagos', '2010-05-31', '', 'DL 001', 'DL 002', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'AILMENT & PREVENTION DVD- HAUSA', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', 800, 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 16, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12800, 12800, 60000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '85600', 563), (28, 'Irenonse Esther', 'Lagos,Nigeria', '2010-06-01', '', 'DL 005', 'DL 008', 'FC 004', '', '', '', '', '', '', '', '', '', '', '', '', 'GET HEALTHY DVD', 'YOUR FUTURE DVD', 'FOREVER FACE CAP (YELLOW)', '', '', '', '', '', '', '', '', '', '', '', '', 1000, 900, 2000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2000, 1800, 6000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '9800', 64), (29, 'Kalu Lekway', 'Lagos, Nigeria', '2010-06-01', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2400, 18000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '20400', 133), (30, 'Dele', 'Ilupeju', '2010-06-02', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8000, 30000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '38000', 250);
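
    Since each order row holds up to 15 product/quantity column pairs, one way to get the total quantity sold per product is to unpivot those pairs with UNION ALL and aggregate. A minimal sketch against the litorder table above (the repetition for columns 4 through 14 is elided to keep it short):

        SELECT product, SUM(quantity) AS total_quantity
        FROM (
            SELECT product1 AS product, quantity1 AS quantity FROM litorder
            UNION ALL SELECT product2, quantity2 FROM litorder
            UNION ALL SELECT product3, quantity3 FROM litorder
            -- ...repeat for product4/quantity4 up to product14/quantity14...
            UNION ALL SELECT product15, quantity15 FROM litorder
        ) AS order_lines
        WHERE product <> ''
        GROUP BY product;

    Normalizing the order lines into their own table (one row per order per product) would make this query trivial and remove the 15-products-per-order ceiling.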


  • WiX Action Sequence

    - by Damian Vogel
    I was looking for list of actions and their sequence when running a WiX setup. Somehow the official website doesn't seem to provide any information. The basic problem is that I want to schedule my custom actions correctly. Typically I need to register a DLL with regsvr32.exe, and this can only be done once the files are copied to the harddrive. However the custom action <Custom Action="RegisterShellExt" After="InstallFiles"> failed with the error message "file not found". What I've done then is analizing the log of my MSI with WiX Edit, and I've found that the Action InstallFiles exists more than once. And effectively the files are written only the second time it appears. So I changed my custom action to the following : <Custom Action="RegisterShellExt" Before="InstallFinalize"> Here is the sequence I've extracted from the logs of my MSI: Action start 15:16:49: INSTALL. Action start 15:16:49: PrepareDlg. Action start 15:16:49: AppSearch. Action start 15:16:49: LaunchConditions. Action start 15:16:49: ValidateProductID. Action start 15:16:49: DIRCA_NEWRETARGETABLEPROPERTY1.5D429292039C46FCA3253E37B4DA262A. Action start 15:16:50: CostInitialize. Action start 15:16:50: FileCost. Action start 15:16:50: CostFinalize. Action start 15:16:50: WelcomeDlg. Action 15:16:51: LicenseAgreementDlg. Dialog created Action 15:16:53: CustomizeDlg. Dialog created Action 15:16:55: VerifyReadyDlg. Dialog created Action start 15:16:56: ProgressDlg. Action start 15:16:56: ExecuteAction. Action start 15:16:58: INSTALL. Action start 15:16:58: AppSearch. Action start 15:16:58: LaunchConditions. Action start 15:16:58: ValidateProductID. Action start 15:16:58: CostInitialize. Action start 15:16:59: FileCost. Action start 15:16:59: CostFinalize. Action start 15:16:59: InstallValidate. Action start 15:17:00: InstallInitialize. Action start 15:17:08: ProcessComponents. Action 15:17:09: GenerateScript. Generating script operations for action: Action ended 15:17:09: ProcessComponents. Return value 1. Action start 15:17:09: UnpublishFeatures. Action start 15:17:09: RemoveShortcuts. Action start 15:17:09: RemoveFiles. Action start 15:17:09: InstallFiles. Action start 15:17:10: CreateShortcuts. Action start 15:17:10: RegisterUser. Action start 15:17:10: RegisterProduct. Action start 15:17:10: PublishFeatures. Action start 15:17:10: PublishProduct. Action start 15:17:10: ConfigureInstaller. Action start 15:17:10: InstallFinalize. Action 15:17:10: ProcessComponents. Updating component registration Action 15:17:12: InstallFiles. Copying new files Action 15:17:21: CreateShortcuts. Creating shortcuts Action 15:17:21: RegisterProduct. Registering product Action 15:17:23: ConfigureInstaller. [[note: CustomAction]] Action 15:17:22: PublishFeatures. Publishing Product Features Begin CustomAction 'ConfigureInstaller' Action 15:17:28: RollbackCleanup. Removing backup files Action ended 15:17:28: InstallFinalize. Return value 1. Action start 15:17:28: RegisterShellExt. [[note: CustomAction]] Action ended 15:17:33: INSTALL. Return value 1. Action start 15:17:35: ExitDialog. Does anyone know an official listing?
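
    For reference, the two InstallFiles entries in the log are the immediate (script-generation) pass and the deferred (execution) pass of the server-side sequence, so an immediate custom action scheduled After="InstallFiles" still runs before any file reaches the disk. A sketch of scheduling the registration as a deferred action instead (INSTALLFOLDER and MyShellExt.dll are placeholder names, not taken from the original setup):

        <CustomAction Id="RegisterShellExt"
                      Directory="INSTALLFOLDER"
                      ExeCommand='regsvr32.exe /s "[INSTALLFOLDER]MyShellExt.dll"'
                      Execute="deferred"
                      Impersonate="no"
                      Return="check" />

        <InstallExecuteSequence>
          <!-- Deferred, so it runs during the execution pass, after the files are on disk -->
          <Custom Action="RegisterShellExt" After="InstallFiles">NOT Installed</Custom>
        </InstallExecuteSequence>

    Self-registration from an installer is generally discouraged in favour of capturing the registry entries the DLL writes, but deferred scheduling is the minimal change to the approach described above.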


  • Simple Convention Automapper for two-way Mapping (Entities to/from ViewModels)

    - by Omu
    UPDATE: this stuff has evolved into a nice project, see it at http://valueinjecter.codeplex.com check this out, I just wrote a simple automapper, it takes the value from the property with the same name and type of one object and puts it into another, and you can add exceptions (ifs, switch) for each type you may need so tell me what do you think about it ? I did it so I could do something like this: Product –> ProductDTO ProductDTO –> Product that's how it begun: I use the "object" type in my Inputs/Dto/ViewModels for DropDowns because I send to the html a IEnumerable<SelectListItem> and I receive a string array of selected keys back public void Map(object a, object b) { var pp = a.GetType().GetProperties(); foreach (var pa in pp) { var value = pa.GetValue(a, null); // property with the same name in b var pb = b.GetType().GetProperty(pa.Name); if (pb == null) { //no such property in b continue; } if (pa.PropertyType == pb.PropertyType) { pb.SetValue(b, value, null); } } } UPDATE: the real usage: the Build methods (Input = Dto): public static TI BuildInput<TI, T>(this T entity) where TI: class, new() { var input = new TI(); input = Map(entity, input) as TI; return input; } public static T BuildEntity<T, TI, TR>(this TI input) where T : class, new() where TR : IBaseAdvanceService<T> { var id = (long)input.GetType().GetProperty("Id").GetValue(input, null); var entity = LocatorConfigurator.Resolve<TR>().Get(id) ?? new T(); entity = Map(input, entity) as T; return entity; } public static TI RebuildInput<T, TI, TR>(this TI input) where T: class, new() where TR : IBaseAdvanceService<T> where TI : class, new() { return input.BuildEntity<T, TI, TR>().BuildInput<TI, T>(); } in the controller: public ActionResult Create() { return View(new Organisation().BuildInput<OrganisationInput, Organisation>()); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(OrganisationInput o) { if (!ModelState.IsValid) { return View(o.RebuildInput<Organisation,OrganisationInput, IOrganisationService>()); } organisationService.SaveOrUpdate(o.BuildEntity<Organisation, OrganisationInput, IOrganisationService>()); return RedirectToAction("Index"); } The real Map method public static object Map(object a, object b) { var lookups = GetLookups(); var propertyInfos = a.GetType().GetProperties(); foreach (var pa in propertyInfos) { var value = pa.GetValue(a, null); // property with the same name in b var pb = b.GetType().GetProperty(pa.Name); if (pb == null) { continue; } if (pa.PropertyType == pb.PropertyType) { pb.SetValue(b, value, null); } else if (lookups.Contains(pa.Name) && pa.PropertyType == typeof(LookupItem)) { pb.SetValue(b, (pa.GetValue(a, null) as LookupItem).GetSelectList(pa.Name), null); } else if (lookups.Contains(pa.Name) && pa.PropertyType == typeof(object)) { pb.SetValue(b, pa.GetValue(a, null).ReadSelectItemValue(), null); } else if (pa.PropertyType == typeof(long) && pb.PropertyType == typeof(Organisation)) { pb.SetValue(b, pa.GetValue<long>(a).ReadOrganisationId(), null); } else if (pa.PropertyType == typeof(Organisation) && pb.PropertyType == typeof(long)) { pb.SetValue(b, pa.GetValue<Organisation>(a).Id, null); } } return b; }
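
    One refinement worth considering for a reflection-based mapper like this is caching the matched property pairs per source/target type pair, so reflection runs once per pair instead of on every call. A rough sketch (the class and member names are illustrative, not part of the project above; ConcurrentDictionary and Tuple are .NET 4 types, so on 3.5 a locked Dictionary with a small key struct would do the same job):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public static class CachedMapper
        {
            // Matched, same-typed, writable property pairs keyed by (source type, target type).
            private static readonly ConcurrentDictionary<Tuple<Type, Type>, List<Tuple<PropertyInfo, PropertyInfo>>>
                Cache = new ConcurrentDictionary<Tuple<Type, Type>, List<Tuple<PropertyInfo, PropertyInfo>>>();

            public static void Map(object a, object b)
            {
                var key = Tuple.Create(a.GetType(), b.GetType());
                var pairs = Cache.GetOrAdd(key, k =>
                    (from pa in k.Item1.GetProperties()
                     let pb = k.Item2.GetProperty(pa.Name)
                     where pb != null && pb.CanWrite && pa.PropertyType == pb.PropertyType
                     select Tuple.Create(pa, pb)).ToList());

                foreach (var pair in pairs)
                    pair.Item2.SetValue(b, pair.Item1.GetValue(a, null), null);
            }
        }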


  • Binding menu items to a sitemap.

    - by Ricardo Deano
    Hello all..this is driving me nuts. I have a navigation menu I would like to display based upon user roles (using.net membership) After several hours and headaches (from banging my head against the desk) I was wondering if someone can point me in the error of my ways. Page: <body> <form runat="server"> <div class="page"> <div class="header"> <div class="loginDisplay"> <asp:LoginView ID="HeadLoginView" runat="server" EnableViewState="false"> <AnonymousTemplate> <a href="~/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> </AnonymousTemplate> <LoggedInTemplate> Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /></span>! [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/Open/Close.aspx"/> ] </LoggedInTemplate> </asp:LoginView> </div> <div class="clear hideSkiplink"> <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" IncludeStyleBlock="False" Orientation="Horizontal" DataSourceID="AugustSiteMap" /> <asp:SiteMapDataSource ID="AugustSiteMap" runat="server" ShowStartingNode="false"/> </div> </div> SiteMap: <?xml version="1.0" encoding="utf-8" ?> <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" > <siteMapNode url="~/Default.aspx" title="Home" description="Home"> <siteMapNode title="Open Pages" description="Open Pages"> <siteMapNode url="~/Open/Login.aspx" title="Login Page" description="Login Page" roles="*"/> <siteMapNode url="~/Open/Close.aspx" title="Thank you for using Valpak Data Solutions Online Reporting" description="Thank you for using Valpak Data Solutions Online Reporting" roles="*"/> </siteMapNode> <siteMapNode title="Logged In Open Pages" description="Logged In Open Pages"> <siteMapNode url="~/Landing.aspx" title="Landing Page" description="Landing Page" roles="*"/> <siteMapNode url="~/ContactUs.aspx" title="Contact Us" description="Contact Us" roles="*"/> </siteMapNode> <siteMapNode title="Restricted Pages" description="Resticted Pages"> <siteMapNode url="~/Restricted/ProductSearch.aspx" title=" Product Search" description=" Product Search" roles="*"/> <siteMapNode url="~/Restricted/ReportOutput.aspx" title="Report Output" description="Report Output" roles="Admin"/> </siteMapNode> </siteMapNode> </siteMap> Webconfig: <roleManager enabled="true" /> <siteMap defaultProvider="XmlSiteMapProvider" enabled="true"> <providers> <add name="XmlSiteMapProvider" description="AugustSiteMap" type="System.Web.XmlSiteMapProvider " siteMapFile="AugustSiteMap.sitemap" securityTrimmingEnabled="true" /> </providers> </siteMap> How can I ensure that when the user is logged in, the appropriate menu items are displayed on the Landing page? Please excuse my ignorance. Still new to all of this and my current method of 'trial and error' has seen me reach suicide levels this morning!
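
    One thing that stands out in the sitemap above is that nearly every node carries roles="*": with securityTrimmingEnabled, the roles attribute grants visibility in addition to URL authorization, so "*" makes a node visible to everyone regardless of login state. A sketch of restricting the "Restricted Pages" branch to a role (using the Admin role already present in the file) together with the matching authorization rule in web.config:

        <siteMapNode title="Restricted Pages" description="Restricted Pages" roles="Admin">
          <siteMapNode url="~/Restricted/ProductSearch.aspx" title="Product Search" description="Product Search" roles="Admin" />
          <siteMapNode url="~/Restricted/ReportOutput.aspx" title="Report Output" description="Report Output" roles="Admin" />
        </siteMapNode>

        <!-- web.config: lock the underlying URLs down so security trimming has something to trim -->
        <location path="Restricted">
          <system.web>
            <authorization>
              <allow roles="Admin" />
              <deny users="*" />
            </authorization>
          </system.web>
        </location>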


  • NHibernate, VS 2010

    - by ??????
    (The question was originally posted in Russian; what follows is a cleaned-up Google translation, so corrections are welcome.) Hello, ANRY! Recently, during a work placement for university, I came across NHibernate. I read your article "Hello NHibernate!". I set out to implement something like a store: products, customers, orders. Accordingly, I created 4 tables in MSSQL 2010: Goods (id_tovara, name, price), Client (id_klienta, name, surname), Order (id_zakaza, id_klienta, cost) and Order Line (id_stroki_zakaza, id_zakaza, id_tovara, quantity), and 4 classes to match: Product, Customer, Order, Order Line. My question is this: do I need to create 4 mapping files, or can I get away with just one? Also, when I Debug I get the following error: "Could not compile the mapping document: Sklad.products.hbm.xml", while "Build" completes normally with no errors. What could the problem be and how can I solve it? Regards, Andrew. P.S. If it's not too much trouble, you can reply by e-mail: [email protected]
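
    On the two questions as asked: NHibernate accepts either one .hbm.xml containing several <class> elements or one file per class (one per class is the usual convention), and "Could not compile the mapping document" carries the real cause in its InnerException - typically a class or property name in the XML that does not match the mapped class, or a missing assembly/namespace attribute; each .hbm.xml file also needs to be compiled as an Embedded Resource. A minimal sketch of one such file, with class and column names guessed from the tables described above rather than taken from the actual project:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="Sklad" namespace="Sklad">
          <class name="Product" table="Goods">
            <id name="Id" column="id_tovara">
              <generator class="native" />
            </id>
            <property name="Name" column="name" />
            <property name="Price" column="price" />
          </class>
        </hibernate-mapping>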


  • How to echo if field is not found?

    - by Fahad
    Hi I'm trying to figure out how to echo back if the value entered does not match when a database lookup is done. I'm using ajax to run the request and php to do the lookup ajax.js: function showResult(str) { if (str=="") { document.getElementById("description").innerHTML=""; return; } if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { document.getElementById("description").innerHTML=xmlhttp.responseText; } } xmlhttp.open("GET","getuser.php?voucher="+str,true); xmlhttp.send(null); } and getuser.php: <?php $q=$_GET["voucher"]; $con = mysql_connect('localhost', 'root', ''); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("test", $con); $sql="SELECT * FROM redemption WHERE voucher = '".$q."'"; $result = mysql_query($sql); echo "<table> <tr> <th>Name</th> <th>Product</th> <th>Address</th> <th>Status</th> </tr>"; while($row = mysql_fetch_array($result)) { echo "<tr>"; echo "<td>" . $row['name'] . "</td>"; echo "<td>" . $row['product'] . "</td>"; echo "<td>" . $row['address'] ." ".$row['city'] ." ".$row['province'] ." ".$row['postal'] . "</td>"; echo "<td>" . $row['status'] . "</td>"; echo "</tr>"; } echo "</table>"; mysql_close($con); ?> What I would like to do is that once the person enters an invalid or a voucher number that is not found I would like to return an error that "Voucher number is not found". There is also a column in the db that stores the status such as "redeemed" or "not redeemed". How could I check for both whether the voucher number exists and if it has already been redeemed? I assume it'd have to be a syntax such as $sql="SELECT * FROM redemption WHERE voucher = '".$q."'" AND status = 'not redeemed' and then use an else or case statement perhaps? Thanks in advance
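
    A sketch of the lookup with both checks folded in - a not-found message when no row comes back and a separate message when the voucher is already redeemed. Column names come from the question; the mysql_* functions mirror the original code, though mysqli or PDO with bound parameters would also close the SQL injection hole left by concatenating $q directly:

        <?php
        $q = mysql_real_escape_string($_GET["voucher"]);
        $sql = "SELECT * FROM redemption WHERE voucher = '" . $q . "'";
        $result = mysql_query($sql);

        if (mysql_num_rows($result) == 0) {
            echo "Voucher number is not found";
        } else {
            $row = mysql_fetch_array($result);
            if ($row['status'] == 'redeemed') {
                echo "This voucher has already been redeemed";
            } else {
                // voucher exists and has not been redeemed yet:
                // render the table exactly as in the original script
            }
        }
        ?>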


  • How do I explain this to potential employers?

    - by ReferencelessBob
    Backstory: TL;DR: I've gained a lot of experience working for 5 years at one startup company, but it eventually failed. The company is gone and the owner MIA. When I left sixth-form college I didn't want to start a degree straight away, so when I met this guy who knew a guy who was setting up a publishing company and needed a 'Techie' I thought why not. It was a very small operation, he sent mailings to schools, waited for orders to start arriving, then ordered a short run of the textbooks to be printed, stuck them in an envelope posted them out. I was initially going to help him set up a computerized system for recording orders and payments, printing labels, really basic stuff and I threw it together in Access in a couple of weeks. He also wanted to start taking orders online, so I set up a website and a paypal business account. While I was doing this, I was also helping to do the day-to-day running of things, taking phone orders, posting products, banking cheques, ordering textbooks, designing mailings, filing end of year accounts, hiring extra staff, putting stamps on envelopes. I learned so much about things I didn't even know I needed to learn about. Things were pretty good, when I started we sold about £10,000 worth of textbooks and by my 4th year there we sold £250,000 worth of text books. Things were looking good, but we had a problem. Our best selling product had peaked and sales started to fall sharply, we introduced add on products through the website to boost sales which helped for a while, but we had simply saturated the market. Our plan was to enter the US with our star product and follow the same, slightly modified, plan as before. We setup a 1-866 number and had the calls forwarded to our UK offices. We contracted a fulfillment company, shipped over a few thousand textbooks, had a mailing printed and mailed, then sat by the phones and waited. Needless to say, it didn't work. We tried a few other things, at home and in the US, but nothing helped. We expanded in the good times, moving into bigger offices, taking on staff to do administrative and dispatch work, but now cashflow was becoming a problem and things got tougher. We did the only thing we could and scaled things right back, the offices went, the admin staff went, I stopped taking a wage and started working from home. Nothing helped. The business was wound up about about 2 years ago. In the end it turned out that the owner had built up considerable debt at the start of business and had not paid them off during good years, which left him in a difficult position when cashflow had started to dry up. I haven't been able to contact the owner since I found out. It took me a while to get back on my feet after that, but I'm now at University and doing a Computer Science degree. How do I show the experience I have without having to get into all the gory details of what happened?


  • Tridion New UI Preview Site does not reflect changes unless published

    - by Ram G
    I have new UI setup and noticing that when ever I update a page it is not refreshing with the updated changes. I do not see either the page_{sessionId/GUID}.aspx created either. Checked the session preview DB and I see the changes in PAGE_CONTENT table with new rendered content, so seems like session preview is working fine but the Preview site is not able to get the changes and refresh the UI. I have checked all the preview handlers and mappings for .aspx and made sure they are correct in web.config. Any thoughts on why the preview site not showing up the changes? I have the session preview DB setup in cd_storage_conf.xml. <StorageBindings> <Bundle src="preview_dao_bundle.xml"/> </StorageBindings> <Wrappers> <Wrapper Name="SessionWrapper"> <Timeout>120000</Timeout> <Storage Type="persistence" Id="db-session-webservice" dialect="MSSQL" Class="com.tridion.storage.persistence.JPADAOFactory"> <Pool Type="jdbc" Size="5" MonitorInterval="60" IdleTimeout="120" CheckoutTimeout="120" /> <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource"> <Property Name="serverName" Value="localhost" /> <Property Name="portNumber" Value="1433" /> <Property Name="databaseName" Value="Tridion_Broker_SessionPreview" /> <Property Name="user" Value="usr" /> <Property Name="password" Value="pwd" /> </DataSource> </Storage> </Wrapper> </Wrappers> web.config (handlers): <add verb="GET" path="*.htm" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.jpg" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.png" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.aspx" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.html" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add name="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" type="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" /> Log (timestamp and DEBUG prefix removed): ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314 ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314 SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US PageHandler - The session wrappers are correctly installed. Any thoughts/pointers on what might be going wrong...? (sorry for the long post)


  • How to bind data to a DropDownList in Kendo UI Mobile

    - by dinesh Haraveer
    I have been using Kendo Mobile to develop an application, previously same application i have done in Kendo web,it's works fine.The main problem is that i have to bind data to two dropdownlist which the below code i have written,when my application is running it show an error like "Microsoft JScript runtime error: Object doesn't support property or method 'append'". in HTML <div id="forms" data-role="view" data-title="Form Elements" data-init="initForm"> <table> <tr> <td> <label style="margin-left: 20px"> Company:</label> </td> <td> <select id="ddlCompany" style="width: 200px"> <option>Select Company</option> </select> </td> <td class="style1"> <label style="margin-left: 20px"> Category:</label> </td> <td> <select id="ddlCategory" style="width: 200px"> <option>Select Category</option> </select> </td> <td> <label style="margin-left: 20px"> Product :</label> </td> <td> <select id="ddlProduct" style="width: 200px"> <option>Select Product</option> </select> </td> </tr> </table> </div> function initForm() { $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "FlashReportMobileWebService.asmx/GetCompany", dataType: "json", success: function (data) { for (i = 0; i < data.d.length; i++) { ddlCompany.append($("<option></option>").val(data.d[i].Company).html(data.d[i].Company)); }; $("#ddlCompany").kendoDropDownList(); } }); $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "FlashReportMobileWebService.asmx/ToCategoryDropDown", dataType: "json", success: function (data) { for (i = 0; i < data.d.length; i++) { ddlCategory.append($("<option></option>").val(data.d[i].Category).html(data.d[i].Category)); }; $("#ddlCategory").kendoDropDownList(); }, failure: function (msg) { alert(msg); } }); } $("#ddlCategory").change( function (e) { var ddlProduct= $("#ddlProduct"); var dataItem = $("#ddlCategory").val(); $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", data: "{'Category':'" + dataItem + "'}", url: "FlashReportWebService.asmx/ToFillProductDropDown", dataType: "json", success: function (data) { ddlProduct.empty(); for (i = 0; i < data.d.length; i++) { ddlProduct.append($("<option></option>").val(data.d[i].ProductName).html(data.d[i].ProductName)); }; $("#ddlProduct").kendoDropDownList(); }, failure: function (msg) { alert(msg); } }); }); var app = new kendo.mobile.Application(document.body); thanks for reading this
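
    The "Object doesn't support property or method 'append'" error is consistent with ddlCompany/ddlCategory being raw DOM elements (IE exposes elements by id as global variables), and DOM elements have no append() method. Wrapping the element in jQuery inside the success handler - and only then building the widget - is a likely fix; a sketch of one handler:

        success: function (data) {
            var ddlCompany = $("#ddlCompany");
            for (var i = 0; i < data.d.length; i++) {
                ddlCompany.append(
                    $("<option></option>").val(data.d[i].Company).html(data.d[i].Company));
            }
            ddlCompany.kendoDropDownList();
        }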


  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
    ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
    156067701  352.313  0.000    642.072  0.000    bpln_contextual.py:204(<genexpr>)
    156067701  289.759  0.000    289.759  0.000    bpln_contextual.py:202(<genexpr>)
    13963893   174.047  0.000    816.119  0.000    {max}
    13963885   69.804   0.000    936.754  0.000    bpln_contextual.py:171(calculate_node_z_prime)
    7116883    61.982   0.000    61.982   0.000    {method 'update' of 'set' objects}
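
    Given that almost all of the time is spent in the two generator expressions, one micro-optimization worth trying before rewriting in a compiled language is to hoist the repeated attribute and dictionary lookups out of the loop and let the C-implemented set intersection pick out the relevant neighbors. A sketch, not a drop-in replacement:

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            node_data = interaction_graph.node           # look the node-data dict up once
            neighbors = interaction_graph.neighbors(node)
            relevant = selected_nodes.intersection(neighbors)
            if not relevant:
                return 0
            max_z_score = max(node_data[n]['weight'] for n in relevant)
            return node_data[node]['weight'] * max_z_score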


  • NHibernate cascade - problem with detached entities

    - by Chev
    I am going nuts here trying to resolve a cascading update/delete issue :-) I have a Parent Entity with a collection Child Entities. If I modify the list of Child entities in a detached Parent object, adding, deleting etc - I am not seeing the updates cascaded correctly to the Child collection. Mapping Files: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Domain" namespace="Domain"> <class name="Parent" table="Parent" > <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Name" type="String" length="250" /> <bag name="ParentChildren" lazy="false" table="Parent_Children" cascade="all-delete-orphan" inverse="true"> <key column="ParentId" on-delete="cascade" /> <one-to-many class="ParentChildren" /> </bag> </class> <class name="ParentChildren" table="Parent_Children"> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <many-to-one name="Parent" class="Parent" column="ParentId" lazy="false" not-null="true" /> </class> </hibernate-mapping> Test [Test] public void Test() { Guid id; int lastModified; // add a child into 1st session then detach using(ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Console.Out.WriteLine("Selecting..."); Parent parent = (Parent) session.Get(typeof (Parent), new Guid("4bef7acb-bdae-4dd0-ba1e-9c7500f29d47")); id = parent.Id; lastModified = parent.LastModified + 1; // ensure the detached version used later is equal to the persisted version Console.Out.WriteLine("Adding Child..."); Child child = (from c in session.Linq<Child>() select c).First(); parent.AddChild(child, 0m); session.Flush(); session.Dispose(); // not needed i know } // attach a parent, then save with no Children using (ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Parent parent = new Parent("Test"); parent.Id = id; parent.LastModified = lastModified; session.Update(parent); session.Flush(); } } I assume that the fact that the product has been updated to have no children in its collection - the children would be deleted in the Parent_Child table. The problems seems to be something to do with attaching the Product to the new session? As the cascade is set to all-delete-orphan I assume that changes to the collection would be propagated to the relevant entities/tables? In this case deletes? What am I missing here? C
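
    With cascade="all-delete-orphan", NHibernate decides what is an orphan by comparing the collection against the snapshot held by its persistent collection wrapper; a Parent rebuilt by hand in a second session carries a plain, empty list with no snapshot, so Update() has nothing to diff and no child rows are removed. A sketch of the usual workaround - modify the persistent instance (session.Merge is the other common route); variable names are illustrative and the collection name comes from the mapping above:

        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var parent = session.Get<Parent>(id);    // load the current, tracked state
            parent.ParentChildren.Clear();            // removals are now visible to NHibernate
            parent.Name = "Test";
            tx.Commit();                              // flush: orphaned children are deleted
        }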


  • Insane SmartGWT + GWT situation... Error on instantiating ListGridRecord?

    - by Xandel
    Hi all, I am asking this here in the hope that someone has maybe come across this situation too... I have posted this on the SmartGWT forum: I am having an issue when trying to instantiate a ListGridRecord object on my server side. I am using the ListGrid on the client side, I want to use GWT's RPC to pass back an array of ListGridRecord objects to populate the grid with. I know that SmartGWT is designed to link to a datasource but I want full control over when I populate the grid and this shouldn't be as much of a nightmare as it is to do. I have searched high and low and cannot find anyone complaining about the same thing. The exception however (listed below) has come up (in my search findings) as a possible memory error - where increasing the memory (-Xmx512m argument) has apparently solved the problem. It did not, however, sort out mine. If anyone can shed any light on this I would greatly appreciate it! Here are my details: Developing using Eclipse Galileo on Ubuntu 9.04 (Jaunty) and GWT 2.0.3, I built the initial GWT project using the webAppCreator bundled with the GWT 2.0.3 release and imported the project into Eclipse as described on the GWT Getting Started Page (as using the GWT Eclipse plugin caused even more nightmares when trying to connect to a database - this is apparently due to using the Google App Engine and turning it off as all the posts suggested only causes ClassNotFound exceptions). The line that causes the error is literally: ListGridRecord a = new ListGridRecord(); The error I get is the following: 00:00:25.916 [WARN] Exception while dispatching incoming RPC call com.google.gwt.user.server.rpc.UnexpectedException : Service method 'public abstract java.lang.String za.co.company.product.client.service.EmployeeServi ce.getAllEmployeeAsListGridRecord()' threw an unexpected exception: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er()V at com.google.gwt.user.server.rpc.RPC.encodeResponseF orFailure(RPC.java:378) at com.google.gwt.user.server.rpc.RPC.invokeAndEncode Response(RPC.java:581) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServi ceServlet.doPost(AbstractRemoteServiceServlet.java :62) at javax.servlet.http.HttpServlet.service(HttpServlet .java:637) at javax.servlet.http.HttpServlet.service(HttpServlet .java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(Ser vletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(Se rvletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle( SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(Se ssionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(Co ntextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebA ppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle (RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(Htt pConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.co ntent(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser. 
java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpPa rser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnec tion.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(Selec tChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run (QueuedThreadPool.java:488) Caused by: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er()V at com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er(Native Method) at com.smartgwt.client.core.JsObject.(JsObjec t.java:30) at za.co.company.product.server.service.EmployeeServi ceImpl.getAllEmployeeAsListGridRecord(EmployeeServ iceImpl.java:83) at sun.reflect.NativeMethodAccessorImpl.invoke0(Nativ e Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Native MethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(De legatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.user.server.rpc.RPC.invokeAndEncode Response(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServi ceServlet.doPost(AbstractRemoteServiceServlet.java :62) at javax.servlet.http.HttpServlet.service(HttpServlet .java:637) at javax.servlet.http.HttpServlet.service(HttpServlet .java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(Ser vletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(Se rvletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle( SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(Se ssionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(Co ntextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebA ppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle (RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(Htt pConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.co ntent(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser. java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpPa rser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnec tion.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(Selec tChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run (QueuedThreadPool.java:488) Thanks in advance! Xandel
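
    ListGridRecord, like the rest of the SmartGWT widget classes, is backed by JSNI/JavaScript, so it can only be instantiated in compiled client code running in a browser; constructing it inside the servlet JVM fails exactly like this, with an UnsatisfiedLinkError on the native setJSNIErrorHandler call. The usual split is to return plain serializable DTOs from the RPC service and build the records on the client. A sketch (EmployeeDTO, the grid field, and the attribute names are placeholders, not the original code):

        // client side, inside the AsyncCallback of the RPC call
        public void onSuccess(List<EmployeeDTO> result) {
            ListGridRecord[] records = new ListGridRecord[result.size()];
            for (int i = 0; i < result.size(); i++) {
                EmployeeDTO dto = result.get(i);
                ListGridRecord record = new ListGridRecord();
                record.setAttribute("name", dto.getName());
                record.setAttribute("department", dto.getDepartment());
                records[i] = record;
            }
            employeeGrid.setData(records);
        }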


  • Cannot create Desktop shortcut

    - by Pantelis
    I have a WiX project and I want to automatically create a ProgramMenu and Desktop shortcut. I've tried the following but the Desktop shortcut is not created. The ProgramMenu shortcut works great. <Product Id="*" Name="Application Name" Language="1033" Version="1.0.0.0" Manufacturer="Company Name"> <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" Description="A description" Comments="Some Comments" /> <MajorUpgrade DowngradeErrorMessage="A newer version of [ProductName] is already installed." /> <MediaTemplate EmbedCab="yes"/> <!-- Minimal UI --> <UIRef Id="WixUI_Minimal"/> <!-- Adding the referenced components --> <Feature Id="Complete" Title="inStorHDRadio Complete" Level="1"> <ComponentGroupRef Id="InstallationComponents" /> <ComponentRef Id="ApplicationProgramsMenuShortcut"/> <ComponentRef Id="ApplicationDesktopShortcut"/> </Feature> </Product> <Fragment> <Directory Id="TARGETDIR" Name="SourceDir"> <!-- Installation Folder --> <Directory Id="ProgramFilesFolder"> <Directory Id="CompanyFolder" Name="CompanyName"> <Directory Id="InstallationFolder" Name="ApplicationName"/> </Directory> </Directory> <!-- Programs Menu Shortcut Folder --> <Directory Id="ProgramMenuFolder" Name="ProgramsMenu"> <Directory Id="ProgramsMenuCompanyFolder" Name="CompanyName"> <Directory Id="ProgramsMenuShortcutFolder" Name="ApplicationName"/> </Directory> </Directory> <!-- Desktop Shortcut Folder --> <Directory Id="DesktopShortcutFolder" Name="Desktop"/> </Directory> </Fragment> <!-- Compoments --> <Fragment> <ComponentGroup Id="inStorHDRadioComponents" Directory="InstallationFolder"> <!-- All application components in Program Files --> </ComponentGroup> <!-- SHORTCUTS --> <!--ProgramsMenu--> <DirectoryRef Id='ProgramsMenuShortcutFolder'> <Component Id='ApplicationProgramsMenuShortcut'> <RemoveFolder Id='RemoveProgramsMenuShortcutFolder' Directory='ProgramsMenuShortcutFolder' On='uninstall' /> <RemoveFolder Id='RemoveProgramsMenuCompanyFolder' Directory='ProgramsMenuCompanyFolder' On='uninstall' /> <Shortcut Id='ApplicationProgramsMenuShortcut' Name='Company Name' Target='[#Application.exe]' WorkingDirectory='InstallationFolder' Icon='application.ico' /> <RegistryValue Name='RegistryValueProgramMenuShortcut' Root='HKCU' Key='Software\Microsoft\[Manufacturer]\[ProductName]' Type='integer' Value='1' /> </Component> </DirectoryRef> <!--Desktop--> <DirectoryRef Id='DesktopShortcutFolder'> <Component Id='ApplicationDesktopShortcut'> <RemoveFolder Id='RemoveDesktopShortcutFolder' Directory='DesktopShortcutFolder' On='uninstall'/> <Shortcut Id='ApplicationDesktopShortcut' Name='Application Name' Target='[#Bootstrapper.exe]' WorkingDirectory='InstallationFolder' Directory='DesktopShortcutFolder' Advertise='no' Icon='application.ico'/> <RegistryValue Name='RegistryValDesktopShortcut' Root='HKCU' Key='Software\[Manufacturer]\[ProductName]' KeyPath='yes' Type='integer' Value='1' /> </Component> </DirectoryRef> </Fragment> <Fragment> <Icon Id="application.ico" SourceFile="Files\application.ico" /> <Icon Id="programs.ico" SourceFile="Files\programs.ico"/> <Property Id="ARPPRODUCTICON" Value="programs.ico" /> <Property Id="ARPHELPLINK" Value="http://www.company.com" /> </Fragment> Whats wrong with the code? The ProgramMenu shortcut is working perfectly fine, but the desktop one is not getting created.
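
    A likely culprit is the <Directory Id="DesktopShortcutFolder" Name="Desktop"/> element: Windows Installer only resolves the user's desktop for the predefined DesktopFolder identifier, so a custom Id just produces a subfolder literally named "Desktop". A sketch of the same component keyed to the standard folder, with the rest mirroring the markup above:

        <Directory Id="TARGETDIR" Name="SourceDir">
          <!-- ...ProgramFilesFolder and ProgramMenuFolder as above... -->
          <Directory Id="DesktopFolder" Name="Desktop" />
        </Directory>

        <DirectoryRef Id="DesktopFolder">
          <Component Id="ApplicationDesktopShortcut">
            <Shortcut Id="ApplicationDesktopShortcut"
                      Name="Application Name"
                      Target="[#Bootstrapper.exe]"
                      WorkingDirectory="InstallationFolder"
                      Icon="application.ico" />
            <RegistryValue Root="HKCU" Key="Software\[Manufacturer]\[ProductName]"
                           Name="RegistryValDesktopShortcut" Type="integer" Value="1" KeyPath="yes" />
          </Component>
        </DirectoryRef>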


  • How to cope with developing against a poor 3rd party API/application?

    - by wsanville
    I'm a web developer, and my organization has recently started to use a proprietary ASP.NET CMS for our web sites. I was excited to get started using the CMS, thinking it would bring a lot of value to our end users and be fun to work with, since my skills are a good match for the types of projects we're using it for. That was about a year ago. Since then, we've ran into all kinds of issues, from blatant bugs in the product, to nasty edge cases in the APIs, to extremely poor documentation for developers. On about a weekly basis, we are forced to pursue workarounds and rewrite some of the out of the box functionality, and even find some of the basic features unusable. In many cases, since this is a closed source application (and obfuscated of course), there's nothing we can do as developers to solve these issues. So my question is, how does one attempt to develop a good application in such a scenario? The application mostly works when using the the exact out of the box behavior, or using one of the company's starter sites. However, my attempts to use the underlying APIs to implement slightly different, yet reasonable behavior has proved to be extremely time consuming (not to mention just as buggy), given the lack of good information about the APIs. I've given this a lot of thought, and my conflicting viewpoints are the following: Strongly advise against any customization to the CMS, as development time will rise exponentially, or even have an extremely high chance of failing. While this is accurate, I do not want to give the impression that I am not willing to code my own solutions to problems and take the initiative to implement something difficult or complex. I don't want to be perceived as someone who is not motivated, lazy, or not knowledgeable to do anything complex, because this is simply not the case. I love coding my own solutions, trying new/difficult things, I just dislike the vendor app we're using. Continue on the path I'm on now, which is hacking my way past all issues I encounter and try my best to deliver an application that meets the needs and specs exactly. My goals are to make it as seamless and easy to use as possible to the end user, even when integrating the CMS with our other applications internally. The problem I'm finding with this approach is it is very time consuming. I open support cases with the vendor on a regular basis to solve issues and to gain knowledge of their APIs, but this is extremely time consuming, and in some cases it leads to dead ends. I post on the vendors forums on a regular basis but have become frustrated as most of my posts get 0 replies. So, what would you, a reasonable developer, do in this case? How can I make the best of the situation? And just for fun, here are some of the code smells and anti-patterns I've dealt with using the product (aside from their own code blatantly failing): Use of StringBuilder to concatenate a giant string that is hard coded and does not change. They use it to concatenate their Javascript and write it out into the body tags of their pages. Methods that accept object or Microsoft.VisualBasic.Collection as the parameters. In the case of the VB Collection, the data is not a list of any kind, it's used instead of making a class. Methods that return a Hashtable of VB Collections Method names of the form MethodName_v45, MethodName_v20, etc... Multiple classes with the same name in different namespaces with different functionality/behavior. 
Intellisense that reads "Note: this parameter is non functional" Complete lack of coding standards, API is filled with magic numbers and magic strings. Properties with a getter of type object that accepts totally different things, like enum or strings, and throw exceptions at runtime when you pass in something not supported. And much, much, more...


  • Plotting multiple Google Maps on a page

    - by Roland
    I'm trying to append more than one Google Map to a page. But it seems like I'm having some trouble. This would be the template I'm using to ( with Handlebars.js ) to create the same block more than once, about 50 times : <script type="text/x-handlebars-template"> {{#each productListing}} <div class="product-listing-wrapper"> <div class="product-listing"> <div class="left-side-content"> <div class="thumb-wrapper" data-image-link="{{ThumbnailUrl}}"> <i class="thumb"> <img src="{{ThumbnailUrl}}" alt="Thumb"> <span class="zoom-image"></span> </i> </div> <div class="google-maps-wrapper"> <div class="google-coordonates-wrapper"> <div class="google-coordonates"> <p>{{LatLon.Lat}}</p> <p>{{LatLon.Lon}}</p> </div> </div> <div class="google-maps-button"> <a class="google-maps" href="#">Google Maps</a> </div> </div> </div> <div class="right-side-content"> <div class="map-canvas-wrapper"> <div id="map-canvas" class="map-canvas" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}"></div> </div> <div class="content-wrapper"></div> </div> </div> </div> {{/each}} And I'm trying to append the map to the #map-canvas id. With the following block of code I'm doing the plotting : Cluster.prototype.initiate_map_assembling = function() { return $(this.map_canvas_wrapper_class).each(function(index, element) { var canvas = $(element).children(); var latitude = $(canvas).attr('data-latitude'); var longitude = $(canvas).attr('data-longitude'); var coordinates = new google.maps.LatLng(latitude, longitude); var options = { zoom: 9, center: coordinates, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map($(canvas), options); var marker = new google.maps.Marker({ position: coordinates, map: map }); }); }; This way I'm "looping" through all the parent classes of the id I'm trying to append the map to, but the map would only append to the first id. I tried to append it to all of the id's in other ways but with the same results. So what would you suggest me to do to make it work as I would expect it, append the map to each of the id's ?
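
    Two details in the code above are worth checking: the template stamps out the same id="map-canvas" fifty times, and duplicate ids only ever resolve to the first element, while google.maps.Map expects a raw DOM element rather than a jQuery wrapper. A sketch of the loop keyed on the class with both points addressed (selector and property names are the ones from the template):

        Cluster.prototype.initiate_map_assembling = function () {
            $('.map-canvas').each(function (index, element) {
                var $canvas = $(element);
                var coordinates = new google.maps.LatLng(
                    $canvas.data('latitude'), $canvas.data('longitude'));

                var map = new google.maps.Map(element, {   // DOM element, not $(element)
                    zoom: 9,
                    center: coordinates,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                });

                new google.maps.Marker({ position: coordinates, map: map });
            });
        };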


  • How to restrict a public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs. Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections, which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern as I see it, is this: The Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather they must inherit it from a common base type. So, a common base type is introduced, which would implement the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods, providing the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept: internal interface IMergeable { void Merge(object obj); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } void IMergeable.Merge(object obj) { Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } All is fine, provided a single common base type suffices, which is usually true for non collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from the scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve wide range of products and should never be tailored to some specific product. Which means, that either: they do not implement the IMergeable interface at all, because it is internal to some product the scope of the IMergeable interface is raised to public and the API becomes open and callable by all. Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with the respective framework version, augmenting the standard with the IMergeable interface implementation - this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to open to all. So what to do? My solution is this. make IMergeable public API, but add an extra parameter to the Merge method, I call it a security token. The interface implementation may check that the token references some internal object, which is never exposed to the outside. If this is the case, then the method was called from within the framework, otherwise - some outside API consumer attempted to invoke it and so the implementation can blow up with a SecurityException. 
Here is the modified code snippet demonstrating this concept: internal static class InternalApi { internal static readonly object Token = new object(); } public interface IMergeable { void Merge(object obj, object token); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } public void Merge(object obj, object token) { if (!object.ReferenceEquals(token, InternalApi.Token)) { throw new SecurityException("bla bla bla"); } Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } Of course, this is less explicit than having an internally scoped interface and the check is moved from the compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, may be using some standard Code Access Security features? I have only vague understanding of it, but can LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.


  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
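To illustrate why a batch size smaller than the total row count can still make sense, here is a minimal SqlBulkCopy sketch (not the poster's code; the table and column names are taken from the example above, and the batch size of 5,000 is an arbitrary illustration). Each batch is a separate round trip, so with UseInternalTransaction a failure only rolls back the batch in flight, and the server is never asked to hold one 500,000-row operation open, which may be part of what surfaces as transport-level errors or timeouts.

using System.Data;
using System.Data.SqlClient;

static class BulkCopyBatchingSketch
{
    // Loads a DataTable into TBL_TEST_TEST in moderately sized batches.
    static void BulkInsert(DataTable rows, string connectionString)
    {
        using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.UseInternalTransaction))
        {
            sbc.DestinationTableName = "TBL_TEST_TEST";
            sbc.ColumnMappings.Add("NAME", "NAME");
            sbc.BatchSize = 5000;      // rows per round trip; each batch commits on its own
            sbc.BulkCopyTimeout = 600; // seconds; the 30-second default can be too short for large loads
            sbc.WriteToServer(rows);
        }
    }
}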

    Read the article

  • Our Look at the Internet Explorer 9 Platform Preview

    - by Asian Angel
    Have you been hearing all about Microsoft’s work on Internet Explorer 9 and are curious about it? If you are wanting a taste of the upcoming release then join us as we take a look at the Internet Explorer 9 Platform Preview. Note: Windows Vista and Server 2008 users may need to install a Platform Update (see link at bottom for more information). Getting Started If you are curious about the systems that the platform preview will operate on here is an excerpt from the FAQ page (link provided below). There are two important points of interest here: The platform preview does not replace your regular Internet Explorer installation The platform preview (and the final version of Internet Explorer 9) will not work on Windows XP There really is not a lot to the install process…basically all that you will have to deal with is the “EULA Window” and the “Install Finished Window”. Note: The platform preview will install to a “Program Files Folder” named “Internet Explorer Platform Preview”. Internet Explorer 9 Platform Preview in Action When you start the platform preview up for the first time you will be presented with the Internet Explorer 9 Test Drive homepage. Do not be surprised that there is not a lot to the UI at this time…but you can get a good idea of how Internet Explorer will act. Note: You will not be able to alter the “Homepage” for the platform preview. Of the four menus available there are two that will be of interest to most people…the “Page & Debug Menus”. If you go to navigate to a new webpage you will need to go through the “Page Menu” unless you have installed the Address Bar Mini-Tool (shown below). Want to see what a webpage will look like in an older version of Internet Explorer? Then choose your version in the “Debug Menu”. We did find it humorous that IE6 was excluded from the choices offered. Here is what the URL entry window looks like if you are using the “Page Menu” to navigate between websites. Here is the main page of the site here displayed in “IE9 Mode”…looking good. Here is the main page viewed in “Forced IE5 Document Mode”. There were some minor differences (colors, sidebar, etc.) in how the main page displayed in comparison to “IE9 Mode”. Being able to switch between modes makes for an interesting experience… As you can see there is not much to the “Context Menu” at the moment. Notice the slightly altered icon for the platform preview… “Add” an Address Bar of Sorts If you would like to use a “make-shift” Address Bar with the platform preview you can set up the portable file (IE9browser.exe) for the Internet Explorer 9 Test Platform Addressbar Mini-Tool. Just place it in an appropriate folder, create a shortcut for it, and it will be ready to go. Here is a close look at the left side of the Address Bar Mini-Tool. You can try to access “IE Favorites” but may have sporadic results like those we experienced during our tests. Note: The Address Bar Mini-Tool will not line up perfectly with the platform preview but still makes a nice addition. And a close look at the right side of the Address Bar Mini-Tool. In order to completely shut down the Address Bar Mini-Tool you will need to click on “Close”. Each time that you enter an address into the Address Bar Mini-Tool it will open a new window/instance of the platform preview. Note: During our tests we noticed that clicking on “Home” in the “Page Menu” opened the previously viewed website but once we closed and restarted the platform preview the test drive website was the starting/home page again. 
Even if the platform preview is not running the Address Bar Mini-Tool can still run as shown here. Note: You will not be able to move the Address Bar Mini-Tool from its’ locked-in position at the top of the screen. Now for some fun. With just the Address Bar Mini-Tool open you can enter an address and cause the platform preview to open. Here is our example from above now open in the platform preview…good to go. Conclusion During our tests we did experience the occasional crash but overall we were pleased with the platform preview’s performance. The platform preview handled rather well and definitely seemed much quicker than Internet Explorer 8 on our test system (a definite bonus!). If you are an early adopter then this could certainly get you in the mood for the upcoming beta releases! Links Download the Internet Explorer 9 Preview Platform Download the Internet Explorer 9 Test Platform Addressbar Mini-Tool Information about Platform Update for Windows Vista & Server 2008 View the Internet Explorer 9 Platform Preview FAQ

    Read the article

  • Developer’s Life – Summary of Superhero Articles

    - by Pinal Dave
    Earlier this year, I wrote an article series where I talked about developer’s life and compared it with Superhero. I have got amazing response to this series and I have been receiving quite a lots of email suggesting that I should write more blog post about them. Currently I am not planning to write more blog post but I will soon continue another series. In this blog post, I have summarized the entire series. Let me know if you want me to write about any superhero. I will see what I can do about that hero. Developer’s Life – Every Developer is a Captain America Captain America was first created as a comic book character in the 1940’s as a way to boost morale during World War II.  Aimed at a children’s audience, his legacy faded away when the war ended.  However, he has recently has a major reboot to become a popular movie character that deals with modern issues. Developer’s Life – Every Developer is the Incredible Hulk The Incredible Hulk is possibly one of the scariest superheroes out there.  All superheroes are meant to be “out of this world” and awe-inspiring, but I think most people will agree with I say The Hulk takes this to the next level.  He is the result of an industrial accident, which is scary enough in it’s own right.  Plus, when mild-mannered Bruce Banner is angered, he goes completely out-of-control and transforms into a destructive monster that he cannot control and has no memories of. Developer’s Life – Every Developer is a Wonder Woman We have focused a lot lately on this “superhero series.”  I love fantasy books and movies, and I feel like there is a lot to be learned from them.  As I am writing this series, though, I have noticed that every super hero I write about is a man.  So today, I would like to talk about the major female super hero – Wonder Woman. Developer’s Life – Every Developer is a Harry Potter Harry Potter might not be a superhero in the traditional sense, but I believe he still has a lot to teach us and show us about life as a developer.  If you have been living under a rock for the last 17 years, you might not know that Harry Potter is the main character in an extremely popular series of books and movies documenting the education and tribulation of a young wizard (and his friends). Developer’s Life – Every Developer is Like Transformers Transformers may not be superheroes – they don’t wear capes, they don’t have amazing powers outside of their size and folding ability, they’re not even human (technically).  Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills. Developer’s Life – Every Developer is a Iron Man Iron Man is another superhero who is not naturally “super,” but relies on his brain (and money) to turn him into a fighting machine.  While traditional superheroes are still popular, a three-movie franchise and incorporation into the new Avengers series shows that Iron Man is popular enough on his own. Developer’s Life – Every Developer is a Sherlock Holmes I have been thinking a lot about how developers are like super heroes, and I have written two blog posts now comparing them to Spiderman and Superman.  I have a lot of love and respect for developers, and I hope that they are enjoying these articles, and others are learning a little bit about the profession.  There is another fictional character who, while not technically asuper hero, is very powerful, and I also think stands as a good example of a developer. That character is Sherlock Holmes.  
Sherlock Holmes is a British detective, first made popular at the turn of the 19thcentury by author Sir Arthur Conan Doyle.  The original Sherlock Holmes was a brilliant detective who could solve the most mind-boggling crime through simple observations and deduction. Developer’s Life – Every Developer is a Chhota Bheem Chhota Bheem is a cartoon character that is extremely popular where I live.  He is my daughter’s favorite characters.  I like to say that children love Chhota Bheem more than their parents – it is lucky for us he is not real!  Children love Chhota Bheem because he is the absolute “good guy.”  He is smart, loyal, and strong.  He and his friends live in Dholakpur and fight off their many enemies – and always win – in every episode.  In each episode, they learn something about friendship, bravery, and being kind to others.  Chhota Bheem is a good role model for children, and I think that he is a good role model for developers are well. Developer’s Life – Every Developer is a Batman Batman is one of the darkest superheroes in the fantasy canon.  He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents.  Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers. Developer’s Life – Every Developer is a Superman I enjoyed comparing developers to Spiderman so much, that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero – Superman.  Superman is probably the most famous superhero – and one of the most inspiring. Developer’s Life – Every Developer is a Spiderman I have to admit, Spiderman is my favorite superhero.  The most recent movie recently was released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could!  Every one of us loved it, including my daughter.  We all left the movie thinking how great it would be to be Spiderman.  So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers. I would like to know which Superhero is your favorite hero! Reference: Pinal Dave (http://blog.SQLAuthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • Using C# 4.0’s DynamicObject as a Stored Procedure Wrapper

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Overview Ignoring the fashion, I still make a lot of use of DALs – typically when inheriting a codebase with an established database schema which is full of tried and trusted stored procedures. In the DAL a collection of base classes have all the scaffolding, so the usual pattern is to create a wrapper class for each stored procedure, giving typesafe access to parameter values and output. DAL calls then looks like instantiate wrapper-populate parameters-execute call:       using (var sp = new uspGetManagerEmployees())     {         sp.ManagerID = 16;         using (var reader = sp.Execute())         {             //map entities from the output         }     }   Or rolling it all into a fluent DAL call – which is nicer to read and implicitly disposes the resources:   This is fine, the wrapper classes are very simple to handwrite or generate. But as the codebase grows, you end up with a proliferation of very small wrapper classes: The wrappers don't add much other than encapsulating the stored procedure call and giving you typesafety for the parameters. With the dynamic extension in .NET 4.0 you have the option to build a single wrapper class, and get rid of the one-to-one stored procedure to wrapper class mapping. In the dynamic version, the call looks like this:       dynamic getUser = new DynamicSqlStoredProcedure("uspGetManagerEmployees", Database.AdventureWorks);     getUser.ManagerID = 16;       var employees = Fluently.Load<List<Employee>>()                             .With<EmployeeMap>()                             .From(getUser);   The important difference is that the ManagerId property doesn't exist in the DynamicSqlStoredProcedure class. Declaring the getUser object with the dynamic keyword allows you to dynamically add properties, and the DynamicSqlStoredProcedure class intercepts when properties are added and builds them as stored procedure parameters. When getUser.ManagerId = 16 is executed, the base class adds a parameter call (using the convention that parameter name is the property name prefixed by "@"), specifying the correct SQL Server data type (mapping it from the type of the value the property is set to), and setting the parameter value. Code Sample This is worked through in a sample project on github – Dynamic Stored Procedure Sample – which also includes a static version of the wrapper for comparison. (I'll upload this to the MSDN Code Gallery once my account has been resurrected). Points worth noting are: DynamicSP.Data – database-independent DAL that has all the data plumbing code. DynamicSP.Data.SqlServer – SQL Server DAL, thin layer on top of the generic DAL which adds SQL Server specific classes. Includes the DynamicSqlStoredProcedure base class. DynamicSqlStoredProcedure.TrySetMember. Invoked when a dynamic member is added. Assumes the property is a parameter named after the SP parameter name and infers the SqlDbType from the framework type. Adds a parameter to the internal stored procedure wrapper and sets its value. uspGetManagerEmployees – the static version of the wrapper. uspGetManagerEmployeesTest – test fixture which shows usage of the static and dynamic stored procedure wrappers. The sample uses stored procedures from the AdventureWorks database in the SQL Server 2008 Sample Databases. Discussion For this scenario, the dynamic option is very favourable. Assuming your DAL is itself wrapped by a higher layer, the stored procedure wrapper classes have very little reuse. 
Even if you're codegening the classes and test fixtures, it's still additional effort for very little value. The main consideration with dynamic classes is that the compiler ignores all the members you use, and evaluation only happens at runtime. In this case, where scope is strictly limited, that's not an issue: you're relying on automated tests rather than the compiler to find errors, but that should just encourage better test coverage. Also you can codegen the dynamic calls at a higher level. Performance may be a consideration, as there is a first-time-use overhead when the dynamic members of an object are bound. For a single run, the dynamic wrapper took 0.2 seconds longer than the static wrapper. The framework does a good job of caching the effort though, so for 1,000 calls the dynamic version still only takes 0.2 seconds longer than the static: You don't get IntelliSense on dynamic objects, even for the declared members of the base class, and if you've been using class names as keys for configuration settings, you'll lose that option if you move to dynamics. The approach may make code more difficult to read, as you can't navigate through dynamic members, but you do still get full debugging support.     var employees = Fluently.Load<List<Employee>>()                             .With<EmployeeMap>()                             .From<uspGetManagerEmployees>                             (                                 i => i.ManagerID = 16,                                 x => x.Execute()                             );
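A stripped-down sketch of the TrySetMember mechanism described above. This is not the author's DynamicSqlStoredProcedure (which infers the SqlDbType explicitly and plugs into the fluent loader); it only shows the core idea of turning dynamic property assignments into stored procedure parameters, with AddWithValue standing in for the type mapping:

using System.Data;
using System.Data.SqlClient;
using System.Dynamic;

public class DynamicStoredProcedureSketch : DynamicObject
{
    private readonly SqlCommand _command;

    public DynamicStoredProcedureSketch(string procedureName, string connectionString)
    {
        _command = new SqlCommand(procedureName, new SqlConnection(connectionString));
        _command.CommandType = CommandType.StoredProcedure;
    }

    // Invoked when a caller assigns to a member not declared on the class,
    // e.g. getUser.ManagerID = 16; the member name becomes the "@ManagerID" parameter.
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _command.Parameters.AddWithValue("@" + binder.Name, value);
        return true;
    }

    public SqlDataReader Execute()
    {
        _command.Connection.Open();
        return _command.ExecuteReader(CommandBehavior.CloseConnection);
    }
}

// Usage:
// dynamic getUser = new DynamicStoredProcedureSketch("uspGetManagerEmployees", connectionString);
// getUser.ManagerID = 16;
// using (var reader = getUser.Execute()) { /* map Employee entities from the reader */ }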

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more.  Here’s a quick look at these new font features in Office 2010. Introduction Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products.  Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem.  It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures. Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love.  This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more.  These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7. Please note that Windows does include several OpenType fonts that include these advanced features.  Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features.  And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms. Using advanced typography in Word To use the new font features, open a new document, select an OpenType font, and enter some text.  Here we have Word 2010 in Windows 7 with some random text in the Gabriola font.  Click the arrow on the bottom of the Font section of the ribbon to open the font properties. Alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting… Choose Proportional or Tabular number spacing… And even select Lining or Old-style number forms. Here’s a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font. Finally, you can choose various Stylistic sets for your font.  The dialog always shows 20 styles, whether or not your font includes that many.  Most include only 1 or 2; Gabriola includes 6. Here’s lorem ipsum text, using the Gabriola font with Stylistic set 6. Impressive, huh?  The font ligatures change based on context, so they will automatically change as you are typing.  Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6. Here’s another example, showing the fi and tt ligatures in Calibri. These effects work great in Word 2010 in XP, too. And, since Outlook uses Word as it’s editing engine, you can use the same options in Outlook 2010.  Note that these font effects may not show up the same if the recipient’s email client doesn’t support advanced OpenType typography.  It will, of course, display perfectly if the recipient is using Outlook 2010. Using advanced typography in Publisher 2010 Publisher 2010 includes the same advanced font features.  This is especially nice for those using Publisher for professional layout and design.  Simply insert a text box, enter some text, select it, and click the arrow on the bottom of the font box as in Word to open the font properties. This font options dialog is actually more advanced than Word’s font options.  You can preview your font changes on sample text right in the properties box.  
You can also choose to add or remove a swash from your characters.   Conclusion Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola.  From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features?  Let us know in the comments! More Reading Advances in typography in Windows 7 – Engineering 7 Blog New features in Microsoft Word 2010

    Read the article

  • To ORM or Not to ORM. That is the question&hellip;

    - by Patrick Liekhus
    UPDATE:  Thanks for the feedback and comments.  I have adjusted my table below with your recommendations.  I had missed a point or two. I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and reasoning on why I choose to use these tools. Let’s start by defining the term ORM, or Object-Relational Mapping.  According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care?  Basically it allows you to map your business objects in code to their persistence layer behind them. And better yet, why would you want to do this?  Let me outline it in the following points: Development speed.  No more need to map repetitive tasks query results to object members.  Once the map is created the code is rendered for you. Persistence portability.  The ORM knows how to map SQL specific syntax for the persistence engine you choose.  It does not matter if it is SQL Server, Oracle and another database of your choosing. Standard/Boilerplate code is simplified.  The basic CRUD operations are consistent and case use database metadata for basic operations. So how does this help?  Well, let’s compare some of the ORM tools that I have used and/or researched.  I have been interested in ORM for some time now.  My ORM of choice for a long time was NHibernate and I still believe it has a strong case in some business situations.  However, you have to take business considerations into account and the law of diminishing returns.  Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistence Objects (XPO).  The primary reason for this is because they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO.  With this added value, the data model can be created (either database first of code first) and the Web and Windows client can be created from these maps.  While out of the box they provide some simple list and detail screens, you can verify easily extend and modify these to your liking.  DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it.  This sounds worse than it really is.  What I mean by this is that if you choose to follow DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while also not worrying about the basics. I have put together a list of the top features that I have used to compare the limited list of ORM’s that I have exposure with.  Again, the biggest selling point in my opinion is that XPO is just a solid as any of the other ORM’s but with the added layer of XAF they become unstoppable.  And then couple that with the EDMX modeling tools and code generation, it becomes a no brainer. 
Designer feature comparison (the ORMs that support each feature):
- Uses XML to map relationships: NHibernate
- Visual class designer interface: Entity Framework; DevExpress XPO/XAF plus Liekhus Tools
- Management integrated w/ Visual Studio: Entity Framework; Telerik OpenAccess; DevExpress XPO/XAF plus Liekhus Tools
- Supports schema first approach: Entity Framework; Telerik OpenAccess; DevExpress XPO/XAF plus Liekhus Tools
- Supports model first approach: Entity Framework; Telerik OpenAccess; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools
- Supports code first approach: Entity Framework; NHibernate; Fluent w/ NHibernate; Telerik OpenAccess; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools
- Attribute driven coding style: Entity Framework; Fluent w/ NHibernate; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools
I have a very small team and limited resources with a lot of responsibilities.  In order to keep up with our customers, we must rely on tools like these.  We use the EDMX tool so that we can create a visual representation of the applications with our customers.  Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly.  This keeps us from requiring as many junior level developers on our team.  I have also worked on multiple teams where they believed in writing their own “framework”.  In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework.  Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of “performance” enhancements specific to one or two requirements.  With an ORM, there are a lot of smarter people than me working on the bigger issue of persistence and performance.  Again, my recommendation would be to use an available framework and get to work on your business domain problems.  If your coding is not making money for you, why are you working on it?  Do you really need to be writing query to object member code again and again? Thanks
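To make the "query to object member code" point concrete, this is the kind of hand-written reader-to-object mapping an ORM generates or eliminates for you. It is a generic sketch with a hypothetical Customer table, not tied to any of the products compared above:

using System.Collections.Generic;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerDal
{
    // The repetitive reader-to-object plumbing an ORM writes (or renders) for you,
    // repeated for every entity and every query when you do it by hand.
    public static List<Customer> LoadAll(string connectionString)
    {
        var customers = new List<Customer>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Customer", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    customers.Add(new Customer
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return customers;
    }
}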

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid so that the query becomes LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid and run that we get a different result. A pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful as we will see in future parts of this series. You can click Next 10 rows or All rows or close the datagrid once you are finished reviewing the data. You may have noticed that the files that I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to another then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv': logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter in/out records from the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv Or we could find all those records where the Order Quantity is equal to 25 and output it to an xml file: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND. Input and Output file formats. LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is usable for whatever process or application you may be using.
Input formats (what LogParser can read):
IISW3C: parses IIS log files in the W3C Extended Log File Format.
IIS: parses IIS log files in the Microsoft IIS Log File Format.
BIN: parses IIS log files in the Centralized Binary Log File Format.
IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
HTTPERR: parses HTTP error log files generated by Http.sys.
URLSCAN: parses log files generated by the URLScan IIS filter.
CSV: parses comma-separated values text files.
TSV: parses tab-separated and space-separated values text files.
XML: parses XML text files.
W3C: parses text files in the W3C Extended Log File Format.
NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
TEXTLINE: returns lines from generic text files.
TEXTWORD: returns words from generic text files.
EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
FS: returns information on files and directories.
REG: returns information on registry values.
ADS: returns information on Active Directory objects.
NETMON: parses network capture files created by NetMon.
ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
COM: provides an interface to Custom Input Format COM Plugins.
Output formats (what LogParser can write):
NAT: formats output records as readable tabulated columns.
CSV: formats output records as comma-separated values text.
TSV: formats output records as tab-separated or space-separated values text.
XML: formats output records as XML documents.
W3C: formats output records in the W3C Extended Log File Format.
TPL: formats output records following user-defined templates.
IIS: formats output records in the Microsoft IIS Log File Format.
SQL: uploads output records to a table in a SQL database.
SYSLOG: sends output records to a Syslog server.
DATAGRID: displays output records in a graphical user interface.
CHART: creates image files containing charts.
So, you can query data from any of the input formats and really easily get it into an output format where it is ready for analysis by other tools. To a DBA or network Administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
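For anyone who wants to experiment before part 3, the SQL output format follows the same SELECT ... INTO pattern. This is a rough, from-memory example where the server and database names are placeholders; verify the exact switch names against LogParser's built-in help (LogParser -h -o:SQL):

LOGPARSER "SELECT SalesOrderID, OrderQty, LineTotal INTO SalesOrderDetailOver25 FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE OrderQty > 25" -i:TSV -o:SQL -server:MyServer -database:Sales -createTable:ON

Here INTO names the destination table rather than a file, and -createTable:ON asks LogParser to create the table if it does not already exist.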

    Read the article

  • I Didn&rsquo;t Get You Anything&hellip;

    - by Bob Rhubart
    Nearly every day this blog features a  list posts and articles written by members of the OTN architect community. But with Christmas just days away, I thought a break in that routine was in order. After all, if the holidays aren’t excuse enough for an off-topic post, then the terrorists have won. Rather than buy gifts for everyone -- which, given the readership of this blog and my budget could amount to a cash outlay of upwards of $15.00 – I thought I’d share a bit of holiday humor. I wrote the following essay back in the mid-90s, for a “print” publication that used “paper” as a content delivery system.  That was then. I’m older now, my kids are older, but my feelings toward the holidays haven’t changed… It’s New, It’s Improved, It’s Christmas! The holidays are a time of rituals. Some of these, like the shopping, the music, the decorations, and the food, are comforting in their predictability. Other rituals, like the shopping, the  music, the decorations, and the food, can leave you curled into the fetal position in some dark corner, whimpering. How you react to these various rituals depends a lot on your general disposition and credit card balance. I, for one, love Christmas. But there is one Christmas ritual that really tangles my tinsel: the seasonal editorializing about how our modern celebration of the holidays pales in comparison to that of Christmas past. It's not that the old notions of how to celebrate the holidays aren't all cozy and romantic--you can't watch marathon broadcasts of "It's A Wonderful White Christmas Carol On Thirty-Fourth Street Story" without a nostalgic teardrop or two falling onto your plate of Christmas nachos. It's just that the loudest cheerleaders for "old-fashioned" holiday celebrations overlook the fact that way-back-when those people didn't have the option of doing it any other way. Dashing through the snow in a one-horse open sleigh? No thanks. When Christmas morning rolls around, I'm going to be mighty grateful that the family is going to hop into a nice warm Toyota for the ride over to grandma's place. I figure a horse-drawn sleigh is big fun for maybe fifteen minutes. After that you’re going to want Old Dobbin to haul ass back to someplace warm where the egg nog is spiked and the family can gather in the flickering glow of a giant TV and contemplate the true meaning of football. Chestnuts roasting on an open fire? Sorry, no fireplace. We've got a furnace for heat, and stuffing nuts in there voids the warranty. Any of the roasting we do these days is in the microwave, and I'm pretty sure that if you put chestnuts in the microwave they would become little yuletide hand grenades. Although, if you've got a snoot full of Yule grog, watching chestnuts explode in your microwave might be a real holiday hoot. Some people may see microwave ovens as a symptom of creeping non-traditional holiday-ism. But I'll bet you that if there were microwave ovens around in Charles Dickens' day, the Cratchits wouldn't have had to entertain an uncharacteristically giddy Scrooge for six or seven hours while the goose cooked. Holiday entertaining is, in fact, the one area that even the most severe critic of modern practices would have to admit has not changed since Tim was Tiny. A good holiday celebration, then as now, involves lots of food, free-flowing drink, and a gathering of friends and family, some of whom you are about as happy to see as a subpoena. 
Just as the Cratchit's Christmas was spent with a man who, for all they knew, had suffered some kind of head trauma, so the modern holiday gathering includes relatives or acquaintances who, because they watch too many talk shows, and/or have poor personal hygiene, and/or fail to maintain scheduled medication, you would normally avoid like a plate of frosted botulism. But in the season of good will towards men, you smile warmly at the mystery uncle wandering around half-crocked with a clump of mistletoe dangling from the bill of his N.R.A. cap. Dickens' story wouldn't have become the holiday classic it has if, having spotted on their doorstep an insanely grinning, raw poultry-bearing, fresh-off-a-rough-night Scrooge, the Cratchits had pulled their shades and pretended not to be home. Which is probably what I would have done. Instead, knowing full well his reputation as a career grouch, they welcomed him into their home, and we have a touching story that teaches a valuable lesson about how the Christmas spirit can get the boss to pump up the payroll. Despite what the critics might say, our modern Christmas isn't all that different from those of long ago. Sure, the technology has changed, but that just means a bigger, brighter, louder Christmas, with lasers and holograms and stuff. It's our modern celebration of a season that even the least spiritual among us recognizes as a time of hope that the nutcases of the world will wake up and realize that peace on earth is a win/win proposition for everybody. If Christmas has changed, it's for the better. We should continue making Christmas bigger and louder and shinier until everybody gets it.  *** Happy Holidays, everyone!   del.icio.us Tags: holiday,humor Technorati Tags: holiday,humor

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
    Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning. The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12. The Sun Fire X4800 M2 server delivers 3.0x times better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors. The Sun Fire X4800 M2 server has 3.1x times better price/performance than the 8-processor 4.7GHz POWER6 IBM System p 570. The Sun Fire X4800 M2 server has 1.6x times better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors. This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips. The Sun Fire X4800 M2 server is the first x86 system to get over 5 million tpmC. The Oracle solution utilized Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance. Performance Landscape Select TPC-C results (sorted by tpmC, bigger is better) System p/c/t tpmC Price/tpmC Avail Database MemorySize Sun Fire X4800 M2 8/80/160 5,055,888 0.89 USD 6/26/2012 Oracle 11g R2 4 TB IBM x3850 X5 4/40/80 3,014,684 0.59 USD 7/11/2011 DB2 ESE 9.7 3 TB IBM x3850 X5 4/32/64 2,308,099 0.60 USD 5/20/2011 DB2 ESE 9.7 1.5 TB IBM System p 570 8/16/32 1,616,162 3.54 USD 11/21/2007 DB2 9.0 2 TB p/c/t - processors, cores, threads Avail - availability date Oracle and IBM TPC-C Response times System tpmC Response Time (sec) New Order 90th% Response Time (sec) New Order Average Sun Fire X4800 M2 5,055,888 0.210 0.166 IBM x3850 X5 3,014,684 0.500 0.272 Ratios - Oracle Better 1.6x 1.4x 1.3x Oracle uses average new order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New-Order can be found in the full disclosure reports on TPC's website TPC-C Official Result Page. Configuration Summary and Results Hardware Configuration: Server Sun Fire X4800 M2 server 8 x 2.4 GHz Intel Xeon Processor E7-8870 4 TB memory 8 x 300 GB 10K RPM SAS internal disks 8 x Dual port 8 Gbs FC HBA Data Storage 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 10 x 2 TB 7.2K RPM 3.5" SAS disks 2 x Sun Storage F5100 Flash Array storage (1.92 TB each) 1 x Brocade 5300 switches Redo Storage 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 11 x 2 TB 7.2K RPM 3.5" SAS disks Clients 8 x Sun Fire X4170 M2 servers, each with 2 x 3.06 GHz Intel Xeon X5675 processors 48 GB memory 2 x 300 GB 10K RPM SAS disks Software Configuration: Oracle Linux (Sun Fire 4800 M2) Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2) Oracle Solaris 10 9/10 (Sun Fire X4170 M2) Oracle Database 11g Release 2 Enterprise Edition with Partitioning Oracle iPlanet Web Server 7.0 U5 Tuxedo CFS-R Tier 1 Results: System: Sun Fire X4800 M2 tpmC: 5,055,888 Price/tpmC: 0.89 USD Available: 6/26/2012 Database: Oracle Database 11g Cluster: no New Order Average Response: 0.166 seconds Benchmark Description TPC-C is an OLTP system benchmark. 
It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. Key Points and Best Practices Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance. COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI Target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at SCSI level (disk, tape, medium changer etc.) are not required to know about the underlying transport. And the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation providing execution context and cleanup of SCSI commands and associated resources and simplifies the task of writing the SCSI or transport modules. Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement from the TPC-C benchmark. See Also Oracle Press Release -- Sun Fire X4800 M2 TPC-C Executive Summary tpc.org Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report tpc.org Transaction Processing Performance Council (TPC) Home Page Ideas International Benchmark Page Sun Fire X4800 M2 Server oracle.com OTN Oracle Linux oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Sun Storage F5100 Flash Array oracle.com OTN Disclosure Statement TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

< Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >