Search Results

Search found 8603 results on 345 pages for 'altering tables'.


  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
This article describes how to display a Tree/Details layout using task flow regions.

    Use Case Description
    Let us take a scenario where we need to display Tree/Details: the left region contains a category hierarchy with items listed in a tree structure (e.g. Region-Countries-Locations-Departments in tree format) and the right region contains the Employees list. In detail, the user drills down through categories using the tree until Employees are listed. Clicking a tree node name displays the Employee list related to that node in the adjacent pane.

    Implementation Steps
    The script for creating the tables and inserting the data required for this application: CreateSchema.sql

    Let's create a Java EE Web Application with Entities based on the Regions, Countries, Locations, Departments and Employees tables. Create a Stateless Session Bean and a data control for the Stateless Session Bean. Add the code below to the session bean, expose the method in the local/remote interface, and generate a data control for it. Note: in the code below, "em" is an EntityManager.

        public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) {
            String queryString = null;
            try {
                // compare string contents with equals(), not ==
                if ("null".equals(treeNodeType)) {
                    queryString = "select * from Employees emp ORDER BY emp.employee_id ASC";
                } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) {
                    // region node selected: join down to employees, filter by region name
                    queryString = "select * from employees emp INNER JOIN departments dept " +
                        "ON emp.department_id = dept.department_id JOIN locations loc " +
                        "ON dept.location_id = loc.location_id JOIN countries cont " +
                        "ON loc.country_id = cont.country_id JOIN regions reg " +
                        "ON cont.region_id = reg.region_id and reg.region_name = '" +
                        paramValue + "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) {
                    // country node selected
                    queryString = "select * from employees emp INNER JOIN departments dept " +
                        "ON emp.department_id = dept.department_id JOIN locations loc " +
                        "ON dept.location_id = loc.location_id JOIN countries cont " +
                        "ON loc.country_id = cont.country_id and cont.country_name = '" +
                        paramValue + "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) {
                    // location (city) node selected
                    queryString = "select * from employees emp INNER JOIN departments dept " +
                        "ON emp.department_id = dept.department_id JOIN locations loc " +
                        "ON dept.location_id = loc.location_id and loc.city = '" +
                        paramValue + "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) {
                    // department node selected
                    queryString = "select * from Employees emp INNER JOIN Departments dept " +
                        "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '" +
                        paramValue + "'";
                }
            } catch (NullPointerException e) {
                System.out.println(e.getMessage());
            }
            return em.createNativeQuery(queryString, Employees.class).getResultList();
        }

    In the ViewController project, create two ADF task flows with page fragments and name them FirstTaskflow and SecondTaskflow respectively. Open FirstTaskflow and, from the component palette, drop a view (page fragment) named TreeList.jsff. Open SecondTaskflow, drop a view (page fragment) named EmpList.jsff, and create two parameters in its overview Parameters tab as shown in the image below. Open TreeList.jsff and, from the data control palette, drop regionsFindAll->Tree as an ADF Tree.
In the Edit Tree Binding dialog, for Tree Level Rules select the display attributes as follows:

        model.Regions - regionName
        model.Countries - countryName
        model.Locations - city
        model.Departments - departmentName

    In the structure panel, click on af:Tree - t1 and edit its selectionListener property. Create a "TreeBean" managed bean with "session" scope as shown in the image below, create a new method named getTreeNodeSelectedValue, and click OK. Open the TreeBean managed bean and add the code below:

        private String treeNodeType;
        private String paramValue;

        public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) {
            RichTree tree = (RichTree) selectionEvent.getSource();
            RowKeySet addedSet = selectionEvent.getAddedSet();
            Iterator i = addedSet.iterator();
            TreeModel model = (TreeModel) tree.getValue();
            model.setRowKey(i.next());
            JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
            // oracle.jbo.Row rw = node.getRow();
            Object selectedTreeNode = node.getAttribute(0);
            Object treeListType = node.getBindings();
            this.setParamValue(selectedTreeNode.toString());
            this.setTreeNodeType(treeListType.toString());
        }

        public void setTreeNodeType(String treeNodeType) { this.treeNodeType = treeNodeType; }
        public String getTreeNodeType() { return treeNodeType; }
        public void setParamValue(String paramValue) { this.paramValue = paramValue; }
        public String getParamValue() { return paramValue; }

    Open EmpList.jsff and, from the data control palette, drop empFilteredByTreeNode->Employees->Table as an ADF Read-only Table. After selecting the Employees result set, pass the pageFlowScope parameters in the Edit Action Binding dialog window as shown in the image below. In the EmpList.jsff page, click the Bindings tab, click Create Executable Binding, select Invoke Action, and follow the steps shown in the image below. Edit the executeEmpFiltered invoke action properties and set Refresh to ifNeeded, so the method is executed whenever the page needs it. Create a Main.jspx page using the Oracle Three Column Layout page template. Drop FirstTaskflow as a region in the start facet and SecondTaskflow as a region in the center facet; in the Edit Task Flow Binding dialog window, pass the input parameters as shown in the image below. Run Main.jspx: the tree is displayed in the left region and the employee details are displayed in the right region. Click on Americas in the tree and all employees related to the Americas region are displayed. Click on Americas->United States of America->South San Francisco->Accounting and only employees belonging to the Accounting department are displayed.
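A word of caution on the sample above: the queries are assembled by concatenating the selected node value into the SQL string, which leaves the method open to SQL injection. Below is a minimal sketch of the department branch rewritten with a positional bind parameter; it reuses the article's method and entity names, but this variant is an assumption, not part of the original sample.

        // Hedged sketch: the department branch of empFilteredByTreeNode using a
        // positional bind variable instead of string concatenation (assumed
        // variant, not from the original article).
        public List<Employees> empFilteredByDepartment(String paramValue) {
            String queryString =
                "select * from Employees emp INNER JOIN Departments dept " +
                "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = ?1";
            return em.createNativeQuery(queryString, Employees.class)
                     .setParameter(1, paramValue) // value bound by JPA, never spliced into the SQL
                     .getResultList();
        }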

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2006
    Auto Generate Script to Delete Deprecated Fields in Current Database
    Early in my career, every time I had to drop a column I had a hard time doing it, because I was scared that the column might be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed it. If the renamed column was needed afterwards, it was very easy to rename it back. However, it is not recommended to keep deleted columns renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements.

    2007
    Shrinking Truncate Log File – Log Full – Part 2
    Shrinking a database or mdf file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file – a very rare one. On that rare occasion, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain and can create major issues during a restore. Anyway, use this feature with caution.

    2008
    Simple Use of Cursor to Print All Stored Procedures of Database Including Schema
    This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there would be a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database.

    Interesting Observation about Order of Resultset without ORDER BY
    In industry, many developers avoid using the ORDER BY clause to display the result in a particular order, thinking that the index enforces the order. In this interesting example, I demonstrate that without using ORDER BY, the same table and a similar query can return different results. The query optimizer returns results using whichever method is optimized for performance. The lesson: there is no order unless ORDER BY is used.

    2009
    Size of Index Table – A Puzzle to Find Index Size for Each Index on Table
    I asked this puzzle earlier, where I asked how to find the index size for each of the tables. The puzzle was very well received and lots of interesting answers were received. To answer this question I have written the following blog posts. I suggest this weekend you try to solve this problem and see if you can come up with a better solution. If not, well, here are the solutions: Solution 1 | Solution 2 | Solution 3

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and it makes a better selection of execution plans.
Suggesting hints to the Query Optimizer should be attempted only when absolutely necessary, and by experienced developers who know exactly what they are doing (or in development, as a way to experiment and learn).

    Interesting Observation – TOP 100 PERCENT and ORDER BY
    I have seen developers and DBAs use TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in a view at all. All the ordering should be done outside the view, and the view should just have the SELECT statement in it. It was quite common to save this extra typing by including the ordering inside the view. At several instances developers want a complete resultset, and for that they include TOP 100 PERCENT along with ORDER BY, assuming that this will simulate the SELECT statement with ORDER BY.

    2010
    SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience
    In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11 at Seattle. I have only one expression for the event: Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about them! When I go back and read it, I feel that this was the best event I attended in 2010.

    Change Database Access to Single User Mode Using SSMS
    The image says it all.

    2011
    SQL Server 2012 introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Before, when any of these functions was needed, people used to write long T-SQL code to simulate them. But now there's no need to do so. Having native functions available also helps performance as well as readability. Each of the following functions is covered both in a SQLAuthority article and on MSDN: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How-to populate different select list content per table row

    - by frank.nimphius
A frequent requirement posted on the OTN forum is to render cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has setItems and getItems methods defined that are accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean's getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        // this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
            ValueExpression ve = efactory.createValueExpression(elctx, "#{row}", Object.class);
            Row rw = (Row) ve.getValue(elctx);

            // use one of the row attributes to determine which list to query and
            // show in the current af:selectOneChoice list
            // ...
            ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
            for ( ... ) {
                SelectItem item = new SelectItem();
                item.setLabel(...);
                item.setValue(...);
                alsi.add(item);
            }
            return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data for individual table rows. This, however, means that you cannot refresh a single select one choice component in a table to change its list values. Instead the whole table needs to be refreshed, rerunning the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, for business services with low-latency data fetching (e.g. ADF Business Components and EJB), and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).
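As a sketch of the session-scope caching strategy mentioned above - assuming a hypothetical "category" row attribute and a hypothetical queryItemsFor() lookup, neither of which is in the original post - the getItems method could memoize the per-category lists like this:

        // Hedged sketch of the caching idea: "category" and queryItemsFor()
        // are hypothetical stand-ins for the row attribute and data lookup
        // the application actually uses.
        private Map<Object, ArrayList<SelectItem>> itemCache =
            new HashMap<Object, ArrayList<SelectItem>>();

        public ArrayList<SelectItem> getItems() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ValueExpression ve = fctx.getApplication().getExpressionFactory()
                .createValueExpression(elctx, "#{row.category}", Object.class);
            Object category = ve.getValue(elctx);

            ArrayList<SelectItem> items = itemCache.get(category);
            if (items == null) {
                items = queryItemsFor(category); // hypothetical data fetch
                itemCache.put(category, items);  // reused on later stampings
            }
            return items;
        }

    Because the bean lives in session scope, each distinct list is then queried once per session instead of once per row rendering.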

    Read the article

  • Moving DataSets through BizTalk

    - by EltonStoneman
[Source: http://geekswithblogs.net/EltonStoneman]

    Yuck. But sometimes you have to, so here are a couple of things to bear in mind:

    Schemas
    Point a codegen tool at a WCF endpoint which exposes a DataSet and it will generate an XSD which describes the DataSet like this:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:annotation>
              <xs:appinfo>
                <ActualType Name="DataSet"
                            Namespace="http://schemas.datacontract.org/2004/07/System.Data"
                            xmlns="http://schemas.microsoft.com/2003/10/Serialization/" />
              </xs:appinfo>
            </xs:annotation>
            <xs:sequence>
              <xs:element ref="xs:schema" />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    In a serialized instance, the element of type xs:schema contains a full schema which describes the structure of the DataSet – tables, columns etc. The second element, of type xs:any, contains the actual content of the DataSet, expressed as a DiffGram:

        <GetDataSetResult>
          <xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
              <xs:complexType>
                <xs:choice minOccurs="0" maxOccurs="unbounded">
                  <xs:element name="Table1">
                    <xs:complexType>
                      <xs:sequence>
                        <xs:element name="Id" type="xs:string" minOccurs="0" />
                        <xs:element name="Name" type="xs:string" minOccurs="0" />
                        <xs:element name="Date" type="xs:string" minOccurs="0" />
                      </xs:sequence>
                    </xs:complexType>
                  </xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
          </xs:schema>
          <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <NewDataSet xmlns="">
              <Table1 diffgr:id="Table11" msdata:rowOrder="0" diffgr:hasChanges="inserted">
                <Id>377fdf8d-cfd1-4975-a167-2ddb41265def</Id>
                <Name>157bc287-f09b-435f-a81f-2a3b23aff8c4</Name>
                <Date>a5d78d83-6c9a-46ca-8277-f2be8d4658bf</Date>
              </Table1>
            </NewDataSet>
          </diffgr:diffgram>
        </GetDataSetResult>

    Put the XSD into a BizTalk schema and it will fail to compile, giving you the error: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. You should be able to work around that, but I've had no luck in BizTalk Server 2006 R2 – instead you can safely change that xs:schema element to be another xs:any type:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    (This snippet omits the annotation, but you can leave it in the schema.) For an XML instance to pass validation through the schema, you'll also need to flag the any elements so they can contain any namespace and skip validation:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any namespace="##any" processContents="skip" />
              <xs:any namespace="##any" processContents="skip" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    You should now have a compiling schema which can be successfully tested against a serialized DataSet.
Transforms
    If you're mapping a DataSet element between schemas, you'll need to use the Mass Copy Functoid to populate the target node from the contents of both the xs:any type elements on the source node. This should give you a compiled map which you can test against a serialized instance. And if you have a .NET consumer on the other side of the mapped BizTalk output, it will correctly deserialize the response into a DataSet.

    Read the article

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
Friday October 26th the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube; however, the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website. 1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds – more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front page main title is: "Much More for Much Less". I was born at the right time to start with the CP/M operating system on the Kaypro and move through the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The new iPad (for those keeping score – iPad 4) is replacing a 7-month-old machine, the new iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most readers in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors – black and green. You could see the 80-column screen flicker with characters – have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin. I think the main question a buyer of technology should be asking is function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf – not a good cereal bowl in the morning. Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, you buy what you WANT and LIKE. And if it's not your only device? Consider this parallel universe: ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that and I can use it also with that outfit." "Do you think this clashes with that?" "Ohh I love how that combination looks on you." Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the manufacturer offers online.

    Pros of each device:

    Microsoft Surface RT: There is a new functionality named SmartGlass which will let you share the content off your tablet to your Xbox 360.
Microsoft Office is loaded on the tablet, and you can have more than one user profile on the tablet if you share it with others.

    Amazon Kindle Fire or Kindle Fire HD: If you are an Amazon consumer with an annual Amazon Prime service you can consume videos and read books off the Amazon site. It's the cheapest device, and it's a step up from the Kindle reader in many ways.

    Apple iPad or iPad mini: Over 270 thousand applications. AirPlay lets you share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or air vs cable providers), AirPlay or SmartGlass are a huge bonus.

    iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods, and it has an option for LTE wireless which none of the other sub-9-inch tablets offer. The screen is non-retina, which means the applications are smaller. Speaking with individuals above 50 in age who wear glasses, the retina display does not make a difference for them; however, they prefer the larger iPad over the new mini.

    Happy shopping this Chanukah season.

    The Kosher Coder.

    Follow me on twitter @KosherCoder

    Read the article

  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
I’d like to give you a bit of background first… so please bear with me! In 1996, whilst studying for the final year of my degree, I applied for a job as a C++ Developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn’t get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good, so in the absence of anything else I took it. Here began my career in the world of software testing! Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second-class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn’t find myself exposed to any of the changes that were happening in the industry. It wasn’t until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…

    In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details for the “Agile Testing Days” conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I’d shied away from attending conferences in the past, one of the main ones being that I didn’t see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.

    Tutorial Day – “Software Testing Reloaded”
    We chose to do the tutorials on the 19th. I chose the one titled “Software Testing Reloaded – So you wanna actually DO something? We’ve got just the workshop for you. Now with even less powerpoint!”. With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables and chairs all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I’ll be using at work as a training exercise! (I won’t explain the game here because I don’t want to let the cat out of the bag…) The tutorial consisted of several games exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice.
The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating, because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!! Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, to challenge requirements, and to have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction to the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…

    Read the article

  • Training a 'replacement', how to enforce standards?

    - by Mohgeroth
Not sure that this is the right Stack Exchange site to ask this on, but here goes...

    Scope
    I work for a small company that employs a few hundred people. The development team for the company is small and works in Visual FoxPro. A specific department in the company hired me as a 'lone gunman' to fix and enhance a pre-existing invoicing system. I've successfully taken an Access application that suffered from a lot of risks and limitations and converted it into a C# application driven off of a SQL Server backend. I have recently obtained my undergraduate degree and am no expert by any means. To help make up for that, I've felt that earning Microsoft certifications will force me to understand more about .NET and how it functions. So, after giving my notice 9 months in advance, 3 months ago a replacement finally showed up. Their role is to learn what I have been designing, in an attempt to support the applications designed in C#.

    The Replacement
    Fresh out of college with no real-world work experience, the first instinct for anything involving data was, and still is, listboxes... any time data is mentioned, the listbox is the control of choice for the replacement. This has gotten to the point, no matter how many times I discuss other controls, where I've seen 5 listboxes on a single form. Classroom experience was almost all C++ console development. So, an example of where I have concern is in a WinForms application: users need to key Reasons into a table to select from later. Given that I know that a strongly typed dataset exists, I can just drag the data source from the toolbox and it would create all of this for me. I realize this is a simple example, but using databinding is the key. For the past few months now we have been talking about the strongly typed dataset, how to use it and where it interacts with other controls; datasets, how they work in relation to binding sources, adapters and data grid views. After handing this project off I expected questions about how to implement these, since for me this is the way to do it. What happened next simply floors me: an instance of an adapter from the strongly typed dataset was created in the activate event of the form, a table was created and filled with data. Then, a loop was made to manually add rows to a listbox from this table. Finally, a variable was kept to do lookups to figure out what ID the record was, for updates if required. How do they modify records, you ask? That was my first question too. You won't believe how simple it is: all you do is double-click, and they type the new value into a pop-up prompt. If I were a data entry operator, all the modal popups would drive me absolutely insane. The final solution exceeds 100 lines of code that must be maintained. So my concern is that none of this is sinking in... the department is only allowed 20 hours a week of their time. Up until last week, we've only been given 4-5 hours a week if I'm lucky. The past week or so, I've been lucky to get 10.

    Question
    WHAT DO I DO?! I have 4 weeks left until I leave and they fully 'support' this application. I love this job and the opportunity it has given me, but it's time for me to spread my wings and find something new. I am in no way, shape or form convinced that they are ready to take over. I do feel that the replacement has the technical ability to 'figure it out', but instead of learning they just write code to do all of this stuff manually.
If the replacement wants to code differently in the end, as long as it works I'm fine with that, as horrifying as it looks. However, to support what I have designed they MUST understand how it works and how I have used controls and the framework to make 'magic' happen. This project has about 40 forms, a database with over 30-some-odd tables, triggers and stored procedures. It relates labor to invoices to contracts to projections... it's not as simple as it was three years ago when I began this project, and the department is now in a position where they cannot survive without it. How in the world can I accomplish any of the following?

    Enforce standards or an understanding of consistent design when the department manager keeps telling them they can do it however they want to
    Find a way to engage the replacement in active learning of the framework and system design that support must be given for
    Gracefully inform sr. management that 5-9 hours a week is simply not enough time to learn about the department, pre-existing processes, applications that need to be supported AND determine where potential enhancements to the system go...

    Yes, I know this is a wall of text; thanks for reading through it, but I simply don't know what I should be doing. For me, this job is a monster of a reference and things would look extremely bad if I left and things fell apart. How do I handle this?

    Read the article

  • Yes, I did it - Skydiving in Mauritius

Finally, I did it – or better said, we did it. Back in November last year I saw the big billboard advertisement for Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this was going to be the perfect birthday gift for my wife. Simply out of curiosity, I would join her tandem jump with a second instructor. Due to her pregnancy with our son I had to be patient... But then finally her birthday arrived, and at our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... recovery from her cesarean was fine, local weather conditions were gorgeous, and the children were under the surveillance of my mum – spending her annual holidays on the island. So, after a late wake-up in the morning, we packed our stuff and off we went. According to Google Maps we had to drive for (only) roughly 50km, but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart on the north-east coast. Anyway, we were not in a hurry and arrived there shortly after noon. The access roads to the airfield are just small down-driven paths through sugar cane fields and, according to our daughter, "it's bumpy!". True, true, true... The facilities at Skydive Austral Mauritius are complete, except for food. Enough space for parking, easy handling at the reception, and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs and a small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearted! Having the kids with us was no issue at all. Quite the opposite: our daughter was allowed to discover more things than we adults did. Even visiting the small airplane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces were one of the best parts of the wait. Great mental preparation, as my wife was getting more anxious about her first jump...

    First, we got some information about the procedures on the plane: how to get seated, tied up with our instructors, and how to get ready for the jump off the plane as soon as we reached the height of 10,000 ft. All well explained and easy to understand. Next, we met our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed at their job. From that point on, the DVD session / recording for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight with an Airbus or a Boeing is astronomic! The climb up to 10,000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagoons near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sun-bathed and looked out over "icing-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius for a better impression. The moment of truth, or better said the point of no return, came after approximately 25 minutes. The door opens, we move into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and Wooooohooooo! through the clouds... It is so amazing and breath-taking!
So indescribable! You have to experience this yourself! Some seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could hear and see us soon, and the landing was easy going. We never had any doubts or fear about our instructors. They did a great job and we are looking forward to booking our next jump. I might even consider taking educational classes on skydiving and earning a license. By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version compared to the 11.1.1.6 features for iPad support are pagination for tables and adaptive flow layout.

    The pagination for tables is self-explanatory: since iPads don't support scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll sets of records or directly jump to a specific set. See the image below.

    The adaptive flow layout takes a bit more explanation. On regular desktops, the UI that you usually build for ADF Faces screens is going to use a stretch layout - meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars appear to allow you to scroll to areas that are "hidden". However, on an iPad this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically, you want the page to be sized based on its content rather than based on the browser window size. In ADF Faces terminology this can be done with the dimensionsFrom property set to "children". And here comes the tricky part: in the past (and also today), when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to "parent" by default. This is true for other layout components you'll add as well. At this point you might be wondering, "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate the need for these tedious manual changes, we introduced a new web.xml parameter, "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS". You basically add the following to your web.xml:

        <context-param>
          <description>
            This parameter controls the default value for component geometry on the page.
            Supported values are:
              legacy - component attributes use the default values as specified for the attributes
                       in the tag documentation (default value)
              auto   - component attributes use the correct default value given the value of their
                       parent component. For example, with this setting, the panelStretchLayout
                       will use "auto" as the default value for its "dimensionsFrom" attribute
                       instead of "parent".
          </description>
          <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
          <param-value>auto</param-value>
        </context-param>

    Once you set this parameter, you only need to set the dimensionsFrom attribute on the top-level layout component of your page, and the rest of the components adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch and flow layout based on the device accessing your application.
For example, I use the following in my page:

        <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"
                               dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children'}">

    This results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below.
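If you'd rather keep that decision in one place than repeat the EL expression on every page, one option is to evaluate the same expression from a managed bean. Below is a minimal sketch, assuming a bean registered under a name such as layoutBean (the bean and its registration are invented here, not from the article):

        import javax.faces.context.FacesContext;

        // Hedged sketch (bean and method names invented): centralizing the
        // touch-screen check so pages can use
        // dimensionsFrom="#{layoutBean.dimensionsFrom}".
        public class LayoutBean {
            public String getDimensionsFrom() {
                FacesContext fctx = FacesContext.getCurrentInstance();
                // evaluate the same EL expression the article uses inline
                Object touch = fctx.getApplication().evaluateExpressionGet(
                    fctx, "#{adfFacesContext.agent.capabilities['touchScreen']}", Object.class);
                // stretch layout on desktops, flow layout on touch devices
                return "none".equals(String.valueOf(touch)) ? "parent" : "children";
            }
        }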

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say, or wade into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system - the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But when you want to test compression and decompression, that can be an issue. I got around this fairly quickly: instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of; I am seeing very realistic performance from decompressing the information for reads through SQLIO.

    But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896kb went to 275,871kb compressed; after SQLIO it went to 608kb compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
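To see why a file full of NULL bytes compresses to nearly nothing, here is a minimal standalone sketch (written in Java for illustration; it is not part of the original post) that deflates a NUL-filled buffer and a random buffer and compares the output sizes:

        import java.util.Random;
        import java.util.zip.Deflater;

        // Minimal sketch (not from the original post): compress 1 MB of NUL
        // bytes and 1 MB of random bytes to show why NUL-filled writes make
        // compression ratios look absurdly good.
        public class NulCompressionDemo {
            static int deflatedSize(byte[] input) {
                Deflater deflater = new Deflater();
                deflater.setInput(input);
                deflater.finish();
                byte[] out = new byte[input.length * 2];
                int total = 0;
                while (!deflater.finished()) {
                    total += deflater.deflate(out);
                }
                deflater.end();
                return total;
            }

            public static void main(String[] args) {
                byte[] nuls = new byte[1024 * 1024];      // all 0x00, like SQLIO's writes
                byte[] random = new byte[1024 * 1024];
                new Random(42).nextBytes(random);          // effectively incompressible noise
                System.out.println("NUL bytes    -> " + deflatedSize(nuls) + " bytes");
                System.out.println("random bytes -> " + deflatedSize(random) + " bytes");
            }
        }

    On a typical run the NUL buffer deflates to around a kilobyte while the random buffer barely shrinks at all - the same effect that turned the 624,896kb file above into 608kb.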

    Read the article

  • Problem with deleting table rows using ctrl+a for row selection

    - by Frank Nimphius
The following code is commonly shown and documented as the way to access the row keys of selected table rows in an ADF Faces table configured for multi-row selection:

        public void onRemoveSelectedTableRows(ActionEvent actionEvent) {
            RichTable richTable = ... get access to your table instance ...
            CollectionModel cm = (CollectionModel) richTable.getValue();
            RowKeySet rowKeySet = (RowKeySet) richTable.getSelectedRowKeys();

            for (Object key : rowKeySet) {
                richTable.setRowKey(key);
                JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) cm.getRowData();
                // do something with rowData, e.g. update, print, copy
            }

            // optional: if you changed data, refresh the table
            AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
            adfFacesContext.addPartialTarget(richTable);
        }

    The code shown above works for 99.5% of all use cases that deal with multi-row selection enabled ADF Faces tables, except for when users use the ctrl+a key to mark all rows for delete. Just to make sure I am clear: if you use ctrl+a to mark rows to perform any other operation on them - like bulk updating all rows for a specific attribute - then this works with the code shown above. Even for bulk row delete, any other means of row selection (shift+click and multiple ctrl+click) works like a charm and the rows are deleted. So apparently it is the use of ctrl+a that causes the problem when deleting multiple rows of an ADF Faces table. To implement code that works for all table selection use cases, including the one to delete all table rows in one go, use the code shown below:

        public void onRemoveSelectedTableRows(ActionEvent actionEvent) {
            RichTable richTable = ... get access to your table instance ...
            CollectionModel cm = (CollectionModel) richTable.getValue();
            RowKeySet rowKeySet = (RowKeySet) richTable.getSelectedRowKeys();
            Object[] rowKeySetArray = rowKeySet.toArray();

            for (Object key : rowKeySetArray) {
                richTable.setRowKey(key);
                JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) cm.getRowData();
                rowData.getRow().remove();
            }

            AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
            adfFacesContext.addPartialTarget(richTable);
        }
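The toArray() call is the significant change: it snapshots the keys before any rows are removed, which suggests the failure comes from mutating a live key set while iterating it (my reading of the workaround; the original post does not explain the root cause). A plain-Java illustration of that general pitfall:

        import java.util.HashSet;
        import java.util.Set;

        // Illustration only (not ADF code): removing elements from a set while
        // iterating it fails fast, while iterating a snapshot array is safe.
        public class IterateAndRemove {
            public static void main(String[] args) {
                Set<Integer> selected = new HashSet<>();
                for (int i = 0; i < 5; i++) {
                    selected.add(i);
                }

                // Safe: iterate a copy of the keys, mutate the live set.
                for (Object key : selected.toArray()) {
                    selected.remove(key);
                }
                System.out.println("remaining: " + selected); // remaining: []

                // By contrast, "for (Integer key : selected) selected.remove(key);"
                // would throw a ConcurrentModificationException.
            }
        }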

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
One of the first places you can see Solaris 11.1 changes is in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some of it would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new breakdown will make it easier to get straight to the sections you need when a task is at hand.

    Packaging System
    Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form, on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so it should be easier to find, and easier to share links to specific pages in the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages guide was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or for viewing the lists before you've installed the system to pick which one you want to initially install.

    X Window System
    We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Administrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OSes share (more about that another time).

    Security
    One of the things Oracle likes to do for its products is to publish security guides for administrators & developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security-relevant configuration options were documented.
The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems to avoid users driving the system into swap-thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris APIs such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard APIs. Solaris-specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc., so that developers following our examples start out with safer code. The Writing Device Drivers guide even had its appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI.

    Little Things
    Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed in the nearly a year between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful:

    The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there.
    A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide.
    The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1.
    The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12).
    The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions
    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns
    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild
    Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.
Adding a NOT NULL column to a table, where a separate change forced a table rebuild Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause. Changing an existing NULL to a NOT NULL column To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value. For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
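    For illustration only, here is a minimal SQL sketch of the statement shapes described above, using a hypothetical table tbl1 and a placeholder default of 0 (the real synchronization script is generated, so names and values will differ):

-- Scenario 1: add a NOT NULL column without a rebuild, via a column default
ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT 0 NOT NULL;

-- Scenario 3: change an existing nullable column to NOT NULL
UPDATE tbl1 SET oldcol = 0 WHERE oldcol IS NULL;
ALTER TABLE tbl1 MODIFY (oldcol NOT NULL);

-- A rebuild datatype conversion (NUMBER to INTERVAL DAY TO SECOND);
-- 'DAY' here is exactly the user-supplied interval_unit discussed above
CREATE TABLE tbl1_rebuild AS
  SELECT NUMTODSINTERVAL(numcol, 'DAY') AS numcol -- , remaining columns...
  FROM tbl1;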

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it. (out of business, support was too expensive, I'm not sure...) Development on this app started 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements of this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and, I think, maybe a bit of resentment of my involvement. I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays little role in their decisions. 5.5 out of 6. My friend wants me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?  A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word “better”, in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale for the purpose of this discussion].
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values.  Specifically here, where we are joining two tables on two columns, one of which is ‘optional’, i.e. is nullable.  So something like this: i.e. lookup where all the columns are equal, even when NULL.   NULLs are a tricky thing to initially wrap your mind around.  Statements like “NULL is not equal to NULL, and neither is it not equal to NULL; it’s NULL” can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy. Before we plod on, time to set up some data to demo against.

Create table #SourceTable
(
  Id integer not null,
  SubId integer null,
  AnotherCol char(255) not null
)
go
create unique clustered index idxSourceTable on #SourceTable(id,subID)
go
with cteNums as
(
  select top(1000) number
  from master..spt_values
  where type ='P'
)
insert into #SourceTable
select Num1.number, nullif(Num2.number,0), 'SomeJunk'
from cteNums num1
cross join cteNums num2
go
Create table #LookupTable
(
  Id integer not null,
  SubID integer null
)
go
insert into #LookupTable
select top(100) id, subid
from #SourceTable
where subid is not null
order by newid()
go
insert into #LookupTable
select top(3) id, subid
from #SourceTable
where subid is null
order by newid()

If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable.  We now want to join one to the other. First attempt – let’s just join:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
and #LookupTable.SubID = #SourceTable.SubID

OK, that’s a fail.  We had 100 rows back; we didn’t correctly account for the 3 rows that have null values.  Remember NULL <> NULL, and the join clause specifies SUBID=SUBID, which for those rows is not true. Second attempt – let’s deal with those pesky NULLs:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0)

OK, that’s the right result, well done, and 99.9% of the time that is where it’s left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems.  But, although that’s true, this is a relational database we are using here, not a procedural language.  SQL is a declarative language; we are making a request to the engine to get the results we want.  How we ask for them can make a ton of difference. Let’s look at the plan for our second attempt, specifically the clustered index seek on the #SourceTable.   There are 2 predicates: the ‘seek predicate’ and the ‘predicate’.  The ‘seek predicate’ describes how SQL Server has been able to use an index.  Here, it has been able to navigate the index to resolve where ID=ID.  So far so good, but what about the ‘predicate’ (aka residual probe)? This is a row-by-row operation.  For each row found in the index matching the seek predicate, the leaf level nodes have been scanned and tested using this logical condition.  In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable.  This residual probe incurs quite a high overhead; we can avoid it if we express our statement slightly differently, so that the test becomes part of the ‘Seek Predicate’ and takes full advantage of the index.
Third attempt – X is null and Y is null. So, let’s state the query in a slightly different manner:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
and ( #LookupTable.SubID = #SourceTable.SubID
      or (#LookupTable.SubID is null and #SourceTable.SubId is null)
    )

So it’s slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Let’s look at the plan for that query, again specifically the index seek operation on #SourceTable. No ‘predicate’, just a ‘Seek Predicate’ against the index to resolve both ID and SubID.  A subtle difference that can be easily overlooked.  But has it made a difference to the performance? Well, yes, a perhaps surprisingly large one. Clever query optimizer; well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used.  By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability, your system will be in a much better position if you can.
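    As an alternative formulation (not one the article itself benchmarks), the same NULL-tolerant test can be written with EXISTS and INTERSECT, since INTERSECT treats NULLs as equal; a sketch against the same temp tables, which the optimizer can typically still resolve with a seek:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
and exists (select #LookupTable.SubID
            intersect
            select #SourceTable.SubID)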

    Read the article

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than previous years also, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications.  Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. And this doesn't stop with the baked-in UX either, with the Design Patterns proving popular and indeed currently being extended to include things like extending ADF Mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog, and it embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers.
This, coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality from Fusion HCM and CX offers popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regular incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already, the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional English reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a lookout for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again, something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, which focuses more on Oracle technology and where there is likely to be a larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

    Read the article

  • How can I implement a database TableView like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView like class. I want filtering, sorting, the ability to freely add and remove items, and transformations (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item:

#include <iostream>
#include <string>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/random_access_index.hpp>

using boost::multi_index_container;
using namespace boost::multi_index;

struct Data {
    Data() {}
    int id;
    std::string name;
};

struct row{};
struct id{};
struct name{};

typedef boost::multi_index_container<
    Data,
    indexed_by<
        random_access<tag<row> >,
        ordered_unique<tag<id>, member<Data, int, &Data::id> >,
        ordered_unique<tag<name>, member<Data, std::string, &Data::name> >
    >
> TDataTable;

class DataTable {
public:
    typedef Data item_type;
    typedef TDataTable::value_type value_type;
    typedef TDataTable::const_reference const_reference;
    typedef TDataTable::index<row>::type TRowIndex;
    typedef TDataTable::index<id>::type TIdIndex;
    typedef TDataTable::index<name>::type TNameIndex;
    typedef TRowIndex::iterator iterator;

    DataTable() :
        row_index(rule_table.get<row>()),
        id_index(rule_table.get<id>()),
        name_index(rule_table.get<name>()),
        row_index_writeable(rule_table.get<row>())
    {
    }

    TDataTable::const_reference operator[](TDataTable::size_type n) const
    { return rule_table[n]; }

    std::pair<iterator,bool> push_back(const value_type& x)
    { return row_index_writeable.push_back(x); }

    iterator erase(iterator position)
    { return row_index_writeable.erase(position); }

    bool replace(iterator position, const value_type& x)
    { return row_index_writeable.replace(position, x); }

    template<typename InputIterator>
    void rearrange(InputIterator first)
    { return row_index_writeable.rearrange(first); }

    void print_table() const;

    unsigned size() const
    { return row_index.size(); }

    TDataTable rule_table;
    const TRowIndex& row_index;
    const TIdIndex& id_index;
    const TNameIndex& name_index;

private:
    TRowIndex& row_index_writeable;
};

class DataTableView {
    DataTableView(const DataTable& source_table) {}
    // How can I implement this?
    // I want filtering, sorting, signaling upper GUI layer, and sorting, and ...
};

int main() {
    Data data1;
    data1.id = 1;
    data1.name = "name1";

    Data data2;
    data2.id = 2;
    data2.name = "name2";

    DataTable table;
    table.push_back(data1);
    DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
    table.erase(it1);
    table.push_back(data1);

    Data new_data(table[0]);
    new_data.name = "new_name";
    table.replace(table.row_index.iterator_to(table[0]), new_data);

    for (unsigned i = 0; i < table.size(); ++i)
        std::cout << table[i].name << std::endl;

#if 0
    // using scenarios:
    DataTableView table_view(table);
    table_view.fill_from_source();   // synchronization with source
    table_view.remove(data_item1);   // remove item from view
    table_view.add(data_item2);      // add item from source table
    table_view.filter(filterfunc);   // filtering
    table_view.sort(sortfunc);       // sorting
    // modifying from source_table, how to signal the table_view?
    // FYI: Table view is attached to a GUI item
    table.erase(data);
    table.replace(data);
#endif

    return 0;
}

    Read the article

  • My Oracle Support: Frequently Asked Questions

    - by user763198
    - Finding certification information: in My Oracle Support, open Help, go to the Table of Contents and see the Certification section, or refer to: My Oracle Support Help - Certifications (Doc ID 870956.5)
    - Further certification questions: first check My Oracle Support Help - Certifications (Doc ID 870956.5); if the answer is not covered there, contact Global Customer Support.
    - Contacting Oracle Support: http://www.oracle.com/support/contact.html
    - Oracle Store questions: https://shop.oracle.com/pls/ostore/f?p=dstore:home:0:::::&tz=8:00 (use the “Contact Us” link on the Store page for further help)
    - Exporting rows from a table: 1. Select the rows, then choose Copy Selected Row, or Copy Selected Rows from the Actions menu. 2. Paste the rows into a spreadsheet or an email. For a whole table, use the Printable View option; tables that offer an Export button can be exported to a CSV file. Note: not every table offers Printable View or Export.
    - Training and course questions: contact Oracle Education at http://education.oracle.com
    - License codes/keys: see http://www.oracle.com/us/support/licensecodes/index.html; for Oracle license code questions not answered there, contact support with your Customer Support Identifier (CSI).
    - Downloading software: see the Software Delivery Cloud FAQ (Doc ID 1070969.1), then go to http://edelivery.oracle.com/ (Oracle Software Delivery Cloud), click the “Sign In/Register” button, accept the Terms & Restrictions and click Continue, search for your product and click Go, then make your selection and click Continue to download.
    - Problems logging in to Oracle.com: 1. Go to www.oracle.com. 2. Click “Help”. 3. In the help page, see the “Oracle.com login FAQ” and follow the instructions there.
    - CRM On Demand (CRMOD) questions: (i) CRMOD Customer Care: http://www.oracle.com/us/products/applications/crmondemand/support/customer-care-support-337680.html (ii) CRMOD Customer Care contact form: https://ebusiness.siebel.com/odcustomercare/contact/contact_cc.asp
    - Verifying your account for the Oracle Software Delivery Cloud (a verification request is sent to your email): 1. Sign in at www.oracle.com with your Single Sign-On (SSO) account. 2. Click “Account”. 3. If prompted with “Please verify account”, confirm your email ID. 4. Once verified, retry the download.
    - Finding an Oracle Database patchset in My Oracle Support: 1. Open the patch search page. 2. Set Product or Product Family to: Oracle database. 3. Set the release, e.g. Oracle 10.2.0.4. 4. Set the platform, e.g. Microsoft x64 (64-bit). 5. Set the patch type to: Patchset/Minipack. 6. Click Go.
    - Requesting physical media (CD/DVD) or FTP delivery: sign in to My Oracle Support (https://support.oracle.com) and use the “Contact Us” link to raise a CD/DVD or FTP delivery request with the details of the software you need.

    Read the article

  • Images missing after moving Django to new server

    - by miszczu
    I'm moving a Django project to a new server. I'm a newbie in Django, and I don't know where the upload folder should be. It contains all the images that should be displayed on the website. In the config file I haven't seen an upload folder I could specify, so I'm guessing it should always be the same location for Django projects, or I just can't find it. Locations are saved in the database. When I put the uploaded files into the media folder, so the url was like domain.co.uk/media/upload/media/images/year/month/day/image_name.ext (the same as on the old website), images on the website were still missing. All images are visible if I enter the url by hand, but Django doesn't seem to see the files. Also I checked the Django log file:

2012-05-30 09:13:33,393 ERROR render: Thumbnail tag failed: [in /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py (line 49)]
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 45, in render
    return self._render(context)
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 97, in _render
    file_, geometry, **options
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/base.py", line 50, in get_thumbnail
    cached = default.kvstore.get(thumbnail)
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 25, in get
    return self._get(image_file.key)
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 123, in _get
    value = self._get_raw(add_prefix(key, identity))
  File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/cached_db_kvstore.py", line 26, in _get_raw
    value = KVStoreModel.objects.get(key=key).value
  File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 132, in get
    return self.get_query_set().get(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 344, in get
    num = len(clone)
  File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 82, in __len__
    self._result_cache = list(self.iterator())
  File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 273, in iterator
    for row in compiler.results_iter():
  File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter
    for rows in self.execute_sql(MULTI):
  File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql
    cursor.execute(sql, params)
  File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute
    return self.cursor.execute(sql, params)
  File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute
    return self.cursor.execute(query, args)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
DatabaseError: (1146, "Table 'thumbnail_kvstore' doesn't exist")

2012-05-30 09:13:33,396 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = office-closed-message ); args=(True, u'office-closed-message') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)]
2012-05-30 09:13:33,399 DEBUG execute: (0.000) SELECT `menus_menu`.`id`, `menus_menu`.`name`, `menus_menu`.`slug`, `menus_menu`.`base_url`, `menus_menu`.`description`, `menus_menu`.`enabled` FROM `menus_menu` WHERE (`menus_menu`.`enabled` = True AND `menus_menu`.`slug` = about ); args=(True, u'about') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)]
2012-05-30 09:13:33,401 DEBUG execute: (0.000) SELECT `menus_menuitem`.`id`, `menus_menuitem`.`menu_id`, `menus_menuitem`.`title`, `menus_menuitem`.`url`, `menus_menuitem`.`order` FROM `menus_menuitem` INNER JOIN `menus_menu` ON (`menus_menuitem`.`menu_id` = `menus_menu`.`id`) WHERE `menus_menu`.`slug` = about ORDER BY `menus_menuitem`.`order` ASC; args=(u'about',) [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)]
2012-05-30 09:13:33,404 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = contactdetails-footer ); args=(True, u'contactdetails-footer') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)]

I checked the database and there is no table called thumbnail_kvstore, but I have a database backup, and in the backup files this table doesn't exist either. All uploaded files I get are in media/uploads/media/. Also I'm getting errors on some pages:

Syntax error. Expected: ``thumbnail source geometry [key1=val1 key2=val2...] as var``
/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py in __init__, line 72
In template /var/www/vhosts/domain.co.uk/sites/apps/shop/products/templates/products/product_detail.html, error at line 34
{% thumbnail image.file "800x700" detail as zoom %}

Maybe some of the modules I installed are not the right version. Don't know how to fix it. I'm using CentOS 6, mod_wsgi, Apache, Python 2.6. Update 1.0: On the old server there was Django 1.3; on the new one is Django 1.3.1. Update 1.1: I think I know where the problem is. I tried python manage.py syncdb and this is the output:

Syncing...
Creating tables ...
The following content types are stale and need to be deleted:
orders | ordercontact
Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'.
Type 'yes' to continue, or 'no' to cancel: no
Installing custom SQL ...
Installing indexes ...
No fixtures found.
Synced:
 > django.contrib.auth
 > django.contrib.contenttypes
 > django.contrib.sessions
 > django.contrib.sites
 > django.contrib.messages
 > django.contrib.admin
 > django.contrib.admindocs
 > django.contrib.markup
 > django.contrib.sitemaps
 > django.contrib.redirects
 > django_filters
 > freetext
 > sorl.thumbnail
 > django_extensions
 > south
 > currencies
 > pagination
 > tagging
 > honeypot
 > core
 > faq
 > logentry
 > menus
 > news
 > shop
 > shop.cart
 > shop.orders
Not synced (use migrations):
 - dbtemplates
 - contactform
 - links
 - media
 - pages
 - popularity
 - testimonials
 - shop.brands
 - shop.collections
 - shop.discount
 - shop.pricing
 - shop.product_types
 - shop.products
 - shop.shipping
 - shop.tax
(use ./manage.py migrate to migrate these)

Next I ran python manage.py migrate, and that's what I get:

Running migrations for dbtemplates:
 - Migrating forwards to 0002_auto__del_unique_template_name.
 > dbtemplates:0001_initial
 ! Error found during real run of migration! Aborting.
 ! Since you have a database that does not support running
 ! schema-altering statements in transactions, we have had
 ! to leave it in an interim state between migrations.
 ! You *might* be able to recover with:
   = DROP TABLE `django_template` CASCADE; []
   = DROP TABLE `django_template_sites` CASCADE; []
 ! The South developers regret this has happened, and would
 ! like to gently persuade you to consider a slightly
 ! easier-to-deal-with DBMS.
 ! NOTE: The error which caused the migration to fail is further up.
Traceback (most recent call last):
  File "manage.py", line 13, in <module>
    execute_manager(settings)
  File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager
    utility.execute()
  File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 220, in execute
    output = self.handle(*args, **options)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/management/commands/migrate.py", line 105, in handle
    ignore_ghosts = ignore_ghosts,
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/__init__.py", line 191, in migrate_app
    success = migrator.migrate_many(target, workplan, database)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 221, in migrate_many
    result = migrator.__class__.migrate_many(migrator, target, migrations, database)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 292, in migrate_many
    result = self.migrate(migration, database)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 125, in migrate
    result = self.run(migration)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 99, in run
    return self.run_migration(migration)
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 81, in run_migration
    migration_function()
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 57, in <lambda>
    return (lambda: direction(orm))
  File "/usr/lib/python2.6/site-packages/django_dbtemplates-1.3-py2.6.egg/dbtemplates/migrations/0001_initial.py", line 18, in forwards
    ('last_changed', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 226, in create_table
    ', '.join([col for col in columns if col]),
  File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 150, in execute
    cursor.execute(sql, params)
  File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute
    return self.cursor.execute(sql, params)
  File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute
    return self.cursor.execute(query, args)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1050, "Table 'django_template' already exists")

Also I ran python manage.py migrate --list, and the output is:

dbtemplates
  (*) 0001_initial
  (*) 0002_auto__del_unique_template_name
contactform
  (*) 0001_initial
  (*) 0002_auto__add_callback
  (*) 0003_auto__add_field_callback_notes
  (*) 0004_auto__add_field_callback_is_closed__add_field_callback_closed
  (*) 0005_auto__add_field_callback_url
  (*) 0006_auto__add_contact
  (*) 0007_auto__add_field_contact_category
  (*) 0008_auto__add_field_contact_url
links
  (*) 0001_initial
  (*) 0002_auto__add_field_category_enabled__add_field_category_order
media
  (*) 0001_initial
  (*) 0002_auto__del_field_image_external_url__add_field_image_link_url__del_fiel
  (*) 0003_add_model_FileAttachment
  (*) 0004_auto__chg_field_file_slug__chg_field_image_slug
  (*) 0005_auto__chg_field_image_file
  (*) 0006_auto__chg_field_file_file
pages
  (*) 0001_initial
  (*) 0002_auto__chg_field_page_meta_description__chg_field_page_meta_title__chg_
  (*) 0003_auto__add_field_page_show_in_sitemap
  (*) 0004_auto__add_field_page_changefreq__add_field_page_priority
popularity
  (*) 0001_initial
testimonials
  (*) 0001_initial
  (*) 0002_auto__add_field_testimonial_is_featured
brands
  (*) 0001_initial
  (*) 0002_auto__add_field_brand_template
  (*) 0003_auto__chg_field_brand_meta_description__chg_field_brand_meta_title__ch
  (*) 0004_auto__add_field_brand_url
  (*) 0005_auto__del_field_brand_image__add_field_brand_logo
collections
  (*) 0001_initial
  (*) 0002_auto__add_field_collection_discount
  (*) 0003_auto__chg_field_collection_meta_description__chg_field_collection_meta
  (*) 0004_auto__add_field_collection_is_featured
  (*) 0005_auto__add_field_collection_order
discount
  (*) 0001_initial
  (*) 0002_added_field_discount_description
  (*) 0003_auto__add_field_discountvoucher_automatic
  (*) 0004_auto__add_field_discountvoucher_collection
  (*) 0005_auto__del_field_discountvoucher_collection
  (*) 0006_auto__chg_field_discountvoucher_expiry_date
pricing
  (*) 0001_initial
  (*) 0002_auto__add_pricingrule
product_types
  (*) 0001_initial
  (*) 0002_auto__add_field_producttype_meta_title__add_field_producttype_meta_des
  (*) 0003_auto__add_field_producttype_summary__add_field_producttype_description
products
  (*) 0001_initial
  (*) 0002_auto__del_field_product_is_featured
  (*) 0003_auto__chg_field_product_meta_keywords__chg_field_product_meta_descript
  (*) 0004_auto
shipping
  (*) 0001_initial
  (*) 0002_auto__add_field_shippingmethod_includes_tax__add_field_shippingmethod_
  (*) 0003_auto__add_field_shippingmethod_order
  (*) 0004_auto__del_field_shippingmethod_tax_rate__del_field_shippingmethod_incl
  (*) 0005_auto__del_field_shippingrule_enabled
tax
  (*) 0001_initial
  (*) 0002_auto__add_field_taxrate_internal_name
  (*) 0003_initial_internal_names
  (*) 0004_auto__add_unique_taxrate_internal_name
  (*) 0005_force_unique_taxrate_name
  (*) 0006_auto__add_unique_taxrate_name

After that, some image sources were something like this: src="cache/1e/bd/1ebd719910aa843238028edd5fe49e71.jpg" Could anyone help me with syncdb please?
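    For reference, the South output above already names a possible way out; a minimal SQL sketch, assuming the half-created dbtemplates tables hold nothing you need:

DROP TABLE `django_template_sites` CASCADE;
DROP TABLE `django_template` CASCADE;
-- then re-run the migration:
--   python manage.py migrate dbtemplates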

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi, so I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features: BulkInsert and BulkCopy. The second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, he's just working around the issue at the cost of performance (nothing wrong with that in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, are there some updates to what I am going to show that would improve it somehow? Is there anything that these tutorials left out that I should know about? BulkInsert I am using this table for all the examples.

CREATE TABLE [dbo].[TBL_TEST_TEST]
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    [NAME] [varchar](50)
)

SP Code

USE [Test]
GO
/****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) )
AS
BEGIN
    INSERT INTO TBL_TEST_TEST VALUES (@Name);
END

C# Code

/// <summary>
/// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert.
/// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go.
/// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
/// </summary>
private static void BatchInsert()
{
    // Get the DataTable with Rows State as RowState.Added
    DataTable dtInsertRows = GetDataTable();

    SqlConnection connection = new SqlConnection(connectionString);
    SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
    command.CommandType = CommandType.StoredProcedure;
    command.UpdatedRowSource = UpdateRowSource.None;

    // Set the Parameter with appropriate Source Column Name
    command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

    SqlDataAdapter adpt = new SqlDataAdapter();
    adpt.InsertCommand = command;
    // Specify the number of records to be Inserted/Updated in one go. Default is 1.
    adpt.UpdateBatchSize = 1000;

    connection.Open();
    int recordsInserted = adpt.Update(dtInsertRows);
    connection.Close();
}

So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a batch size of 500,000. Next, why does it crash when I do this? If I set it to 1000 for the batch size it works just fine.

System.Data.SqlClient.SqlException was unhandled
Message="A transport-level error has occurred when sending the request to the server.
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
Source=".Net SqlClient Data Provider"
ErrorCode=-2146232060
Class=20
LineNumber=0
Number=233
Server=""
State=0
StackTrace:
  at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
  at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
  at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
  at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
  at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
  at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
  at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
InnerException:

Inserting 500,000 records with an insert batch size of 1,000 took "2 mins and 54 seconds". Of course this is no official time; I sat there with a stop watch (I am sure there are better ways but I was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy.

/// <summary>
/// An ado.net 2.0 way to mass insert records. This seems to be the fastest.
/// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
/// </summary>
private static void BatchBulkCopy()
{
    // Get the DataTable
    DataTable dtInsertRows = GetDataTable();

    using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
    {
        sbc.DestinationTableName = "TBL_TEST_TEST";

        // Number of records to be processed in one go
        sbc.BatchSize = 500000;

        // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table
        // sbc.ColumnMappings.Add("ID", "ID");
        sbc.ColumnMappings.Add("NAME", "NAME");

        // Number of records after which client has to be notified about its status
        sbc.NotifyAfter = dtInsertRows.Rows.Count;

        // Event that gets fired when NotifyAfter number of records are processed.
        sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

        // Finally write to server
        sbc.WriteToServer(dtInsertRows);
        sbc.Close();
    }
}

This one seemed to go really fast and did not even need a SP (can you use a SP with bulk copy? If you can, would it be better?). BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the bulkinsert one above. Now I tried the other tutorial.

USE [Test]
GO
/****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
AS
DECLARE @hDoc int
exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData
INSERT INTO TBL_TEST_TEST(NAME)
SELECT XMLProdTable.NAME
FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
WITH (
    ID Int,
    NAME varchar(100)
) XMLProdTable
EXEC sp_xml_removedocument @hDoc

C# code.

/// <summary>
/// This is using linq to sql to make the table objects.
/// It is then serialized to an xml document and sent to a stored procedure
/// that then does a bulk insert (I think with OpenXML)
/// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
/// </summary>
private static void LinqInsertXMLBatch()
{
    using (TestDataContext db = new TestDataContext())
    {
        TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
        for (int count = 0; count < 500000; count++)
        {
            TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
            testRecord.NAME = "Name : " + count;
            testRecords[count] = testRecord;
        }

        StringBuilder sBuilder = new StringBuilder();
        System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
        XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
        serializer.Serialize(sWriter, testRecords);
        db.insertTestData(sBuilder.ToString());
    }
}

So I like this because I get to use objects, even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way of course was just using linq to do it all and it was pretty bad.

/// <summary>
/// This is using linq to sql to insert lots of records.
/// This way is slow as it uses no mass insert.
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records.
/// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
/// </summary>
private static void LinqInsertAll()
{
    using (TestDataContext db = new TestDataContext())
    {
        db.CommandTimeout = 600;
        for (int count = 0; count < 50000; count++)
        {
            TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
            testRecord.NAME = "Name : " + count;
            db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
        }
        db.SubmitChanges();
    }
}

I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml.Serialization;
using System.Data;
using System.Data.SqlClient;

namespace TestIQueryable
{
    class Program
    {
        private static string connectionString = "";

        static void Main(string[] args)
        {
            BatchInsert();
            Console.WriteLine("done");
        }

        /// <summary>
        /// This is using linq to sql to insert lots of records.
        /// This way is slow as it uses no mass insert.
        /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records.
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertAll()
        {
            using (TestDataContext db = new TestDataContext())
            {
                db.CommandTimeout = 600;
                for (int count = 0; count < 50000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
                }
                db.SubmitChanges();
            }
        }

        /// <summary>
        /// This is using linq to sql to make the table objects.
        /// It is then serialized to an xml document and sent to a stored procedure
        /// that then does a bulk insert (I think with OpenXML)
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertXMLBatch()
        {
            using (TestDataContext db = new TestDataContext())
            {
                TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
                for (int count = 0; count < 500000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    testRecords[count] = testRecord;
                }

                StringBuilder sBuilder = new StringBuilder();
                System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
                XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
                serializer.Serialize(sWriter, testRecords);
                db.insertTestData(sBuilder.ToString());
            }
        }

        /// <summary>
        /// An ado.net 2.0 way to mass insert records. This seems to be the fastest.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchBulkCopy()
        {
            // Get the DataTable
            DataTable dtInsertRows = GetDataTable();

            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";

                // Number of records to be processed in one go
                sbc.BatchSize = 500000;

                // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table
                // sbc.ColumnMappings.Add("ID", "ID");
                sbc.ColumnMappings.Add("NAME", "NAME");

                // Number of records after which client has to be notified about its status
                sbc.NotifyAfter = dtInsertRows.Rows.Count;

                // Event that gets fired when NotifyAfter number of records are processed.
                sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

                // Finally write to server
                sbc.WriteToServer(dtInsertRows);
                sbc.Close();
            }
        }

        /// <summary>
        /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert.
        /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchInsert()
        {
            // Get the DataTable with Rows State as RowState.Added
            DataTable dtInsertRows = GetDataTable();

            SqlConnection connection = new SqlConnection(connectionString);
            SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.UpdatedRowSource = UpdateRowSource.None;

            // Set the Parameter with appropriate Source Column Name
            command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

            SqlDataAdapter adpt = new SqlDataAdapter();
            adpt.InsertCommand = command;
            // Specify the number of records to be Inserted/Updated in one go. Default is 1.
            adpt.UpdateBatchSize = 500000;

            connection.Open();
            int recordsInserted = adpt.Update(dtInsertRows);
            connection.Close();
        }

        private static DataTable GetDataTable()
        {
            // You first need a DataTable that has all the insert values in it
            DataTable dtInsertRows = new DataTable();
            dtInsertRows.Columns.Add("NAME");

            for (int i = 0; i < 500000; i++)
            {
                DataRow drInsertRow = dtInsertRows.NewRow();
                string name = "Name : " + i;
                drInsertRow["NAME"] = name;
                dtInsertRows.Rows.Add(drInsertRow);
            }
            return dtInsertRows;
        }

        static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
        {
            Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
        }
    }
}
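    On the "change the SP to fit my tables" question, a hedged sketch may help: assuming a hypothetical TBL_PRODUCT(NAME, PRICE) table (not one from the post), the pattern is mechanical. The OPENXML rowset path follows XmlSerializer's ArrayOfX/X element naming, flag 2 means element-centric mapping, and the WITH clause lists the columns to pull out of each serialized element:

-- Hypothetical adaptation of the OPENXML stored procedure for a two-column table
CREATE PROCEDURE [dbo].[spTEST_InsertXMLProducts](@UpdatedProdData nText)
AS
DECLARE @hDoc int
EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData
INSERT INTO TBL_PRODUCT(NAME, PRICE)
SELECT XMLProdTable.NAME, XMLProdTable.PRICE
FROM OPENXML(@hDoc, 'ArrayOfTBL_PRODUCT/TBL_PRODUCT', 2)
WITH (
    NAME varchar(100),
    PRICE decimal(10,2)
) XMLProdTable
EXEC sp_xml_removedocument @hDoc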

    Read the article

  • JPA exception: Object: ... is not a known entity type.

    - by Toto
    I'm new to JPA and I'm having problems with the autogeneration of primary key values. I have the following entity:

package jpatest.entities;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class MyEntity implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    private String someProperty;

    public String getSomeProperty() {
        return someProperty;
    }

    public void setSomeProperty(String someProperty) {
        this.someProperty = someProperty;
    }

    public MyEntity() {
    }

    public MyEntity(String someProperty) {
        this.someProperty = someProperty;
    }

    @Override
    public String toString() {
        return "jpatest.entities.MyEntity[id=" + id + "]";
    }
}

and the following main method in another class:

public static void main(String[] args) {
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("JPATestPU");
    EntityManager em = emf.createEntityManager();
    em.getTransaction().begin();
    MyEntity e = new MyEntity("some value");
    em.persist(e); /* (exception thrown here) */
    em.getTransaction().commit();
    em.close();
    emf.close();
}

This is my persistence unit:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
  <persistence-unit name="JPATestPU" transaction-type="RESOURCE_LOCAL">
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    <class>jpatest.entities.MyEntity</class>
    <properties>
      <property name="toplink.jdbc.user" value="..."/>
      <property name="toplink.jdbc.password" value="..."/>
      <property name="toplink.jdbc.url" value="jdbc:mysql://localhost:3306/jpatest"/>
      <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver"/>
      <property name="toplink.ddl-generation" value="create-tables"/>
    </properties>
  </persistence-unit>
</persistence>

When I execute the program I get the following exception at the line marked with the comment:

Exception in thread "main" java.lang.IllegalArgumentException: Object: jpatest.entities.MyEntity[id=null] is not a known entity type.
  at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:3212)
  at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.persist(EntityManagerImpl.java:205)
  at jpatest.Main.main(Main.java:...)

What am I missing?

    Read the article

  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database where I store data from large, specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again.

    From the database, I have created both an Entity Framework model and a DataSet model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework model, whose CreateQuery method is exposed via an interface like this:

    public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
    {
        return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
    }

    Now, sometimes my files are very small, and in such a case I would like to omit the import into the database and just have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing, to transfer it directly into an ObjectContext which is accessible via the interface.

    Does this make sense? Here is what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type, and add them to an instantiated object of my typed entity context class, like so:

    MyEntities context = new MyEntities(); // create new in-memory context
    // ...
    // get the item in the navigations table
    MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); // here, a foreach would be necessary in a real-world scenario
    NavigationResult entity = new NavigationResult
    {
        Direction = dataRow.Direction,
        // ...
        NavigationResultID = dataRow.NavigationResultID
    }; // convert to entity
    context.AddToNavigationResult(entity); // add to entities
    // ...

    This is very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet I have. And beware, if I ever change my database model....

    Also, I have found out that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database.

    Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object?

    P.S.: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
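
    On the "converter for each entity type" point: one way to avoid writing a converter per entity is a single reflection-based helper that copies any DataRow into any entity whose writable property names match the column names. The sketch below is only an illustration of that idea, not an Entity Framework feature; it does no type coercion, so it assumes the column types already match the property types.

    using System;
    using System.Data;
    using System.Reflection;

    public static class DataRowConverter
    {
        // Copies a DataRow into a new TEntity by matching column names
        // to writable property names. No type conversion is attempted.
        public static TEntity ToEntity<TEntity>(DataRow row) where TEntity : new()
        {
            TEntity entity = new TEntity();
            foreach (DataColumn column in row.Table.Columns)
            {
                PropertyInfo property = typeof(TEntity).GetProperty(column.ColumnName);
                if (property != null && property.CanWrite && row[column] != DBNull.Value)
                {
                    property.SetValue(entity, row[column], null);
                }
            }
            return entity;
        }
    }

    With a helper like that, the hand-written block above collapses to something like:

    NavigationResult entity = DataRowConverter.ToEntity<NavigationResult>(dataRow);
    context.AddToNavigationResult(entity);

    It does not remove the connection-string requirement, though; MyEntities will still demand one at construction time.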

    Read the article

  • System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

    - by coffeeaddict
    We have 2 identical databases on our dev server. In both is a table that holds X509Certificate2 data in one of its fields. When I grab the cert, along with the password that I've been using all along, it works over the first database just fine. When I run the same code over the 2nd database, though, I get this error, and I don't get why, since the setup is exactly the same and the database is also on this server. So I'm not sure why, when I switch my connection string to talk to Database2, it complains, even though it's set up the same and the code I'm running over it is the same. Here's the stack trace:

    An existing connection was forcibly closed by the remote host
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

    [SocketException (0x2746): An existing connection was forcibly closed by the remote host]
        System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +232

    [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.]
        System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +7035903
        System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) +58
        System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) +116
        System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123
        System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86
        System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123
        System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86
        System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123
        System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86
        System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123
        System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) +7184357
        System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) +217
        System.Threading.ExecutionContext.runTryCode(Object userData) +376
        System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0
        System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) +98
        System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) +1134
        System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) +88
        System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) +20
        System.Net.ConnectStream.WriteHeaders(Boolean async) +360

    [WebException: The underlying connection was closed: An unexpected error occurred on a send.]
        System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) +857631
        System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) +10
        System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +243
        Pmall.PayPal.PayPalApi.PayPalAPIAASoapBinding.SetExpressCheckout(SetExpressCheckoutReq SetExpressCheckoutReq) in C:\www\ssss\ssss\ssss\ssss\Reference.cs:1304
        ssss.PayPal.ExpressCheckout.PayPalCheckout.SetExpressCheckout(PaymentDetailsType[] paymentDetails, String returnURL, String cancelURL, PayPalPaymentFlowType paymentFlowType) in C:\www\ssss\ssss\ssss\PayPalCheckout.cs:96
        ssss.Web.ssss.SetExpressCheckout() in C:\www\ssss\ssss\ssss.aspx.cs:83
        ssss.Web.ssss.Page_Load(Object sender, EventArgs e) in C:\www\ssss\ssss\Register.aspx.cs:24
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
        System.Web.UI.Control.OnLoad(EventArgs e) +132
        System.Web.UI.Control.LoadRecursive() +66
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
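
    One detail worth noticing in this trace: it dies inside SslState.ForceAuthentication, i.e. during the TLS handshake of the outbound call to PayPal, before any SOAP is sent. If the certificate blob read from Database2 is corrupt, truncated, or missing its private key, the handshake can fail with exactly this kind of forcibly-closed error even though "the code is the same". A cheap way to rule that out is to load the raw blob from each database and compare; in this sketch, how the bytes and password are fetched is left as hypothetical parameters:

    using System;
    using System.Security.Cryptography.X509Certificates;

    static class CertCheck
    {
        // Compares the certificate blobs pulled from the two databases.
        // A corrupt blob will throw a CryptographicException in the constructor.
        public static void Compare(byte[] bytesFromDb1, byte[] bytesFromDb2, string password)
        {
            X509Certificate2 cert1 = new X509Certificate2(bytesFromDb1, password);
            X509Certificate2 cert2 = new X509Certificate2(bytesFromDb2, password);
            Console.WriteLine("DB1: thumbprint {0}, has private key: {1}", cert1.Thumbprint, cert1.HasPrivateKey);
            Console.WriteLine("DB2: thumbprint {0}, has private key: {1}", cert2.Thumbprint, cert2.HasPrivateKey);
        }
    }

    If the thumbprints differ, or Database2's copy lacks its private key, the problem is in the stored certificate rather than in the PayPal code path.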

    Read the article

  • Adding a new entity gives an error: EntityCommandCompilationException was unhandled by user code

    - by programmerist
    I have 5 tables in the project as it was started. If I add a new table (the Urun entity) and write the code below in project.BAL:

    public static List<Urun> GetUrun()
    {
        using (GenoTipSatisEntities genSatisUrunCtx = new GenoTipSatisEntities())
        {
            ObjectQuery<Urun> urun = genSatisUrunCtx.Urun;
            return urun.ToList();
        }
    }

    and then consume the data from BAL in UI.aspx:

    using project.BAL;

    namespace GenoTip.Web.ContentPages.Satis
    {
        public partial class SatisUrun : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    FillUrun();
                }
            }

            void FillUrun()
            {
                ddlUrun.DataSource = SatisServices.GetUrun();
                ddlUrun.DataValueField = "ID";
                ddlUrun.DataTextField = "Ad";
                ddlUrun.DataBind();
            }
        }
    }

    I added Urun later, and the error appears at the ToList() method: EntityCommandCompilationException was unhandled by user code. Error details:

    Error 1 - Error 3007: Problem in Mapping Fragments starting at lines 659, 873: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL

    Error 2 - Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND PK is in 'FaturaDetay' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 874 11 GenoTip.DAL

    Error 3 - Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'FaturaDetay' EntitySet AND PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL

    Error 4 - Error 3007: Problem in Mapping Fragments starting at lines 748, 879: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL

    Error 5 - Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND PK is in 'Satis' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 880 11 GenoTip.DAL

    Error 6 - Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'Satis' EntitySet AND PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL

    Read the article
