Search Results

Search found 30636 results on 1226 pages for 'database versioning'.


  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
    import java.awt.*;
    import javax.swing.*;
    import java.awt.event.*;

    public class Search extends JFrame implements ActionListener {

        JTextField edpno = new JTextField(10);
        JLabel l0 = new JLabel("EDP Number: ");
        JComboBox fname = new JComboBox();
        JLabel l1 = new JLabel("First Name: ");
        JTextField lname = new JTextField(20);
        JLabel l2 = new JLabel("Last Name: ");
        JComboBox contno = new JComboBox();
        JLabel l3 = new JLabel("Contact Number: ");
        JButton bOK = new JButton("Save");
        JButton bRetrieve = new JButton("Retrieve");
        private ImageIcon icon;

        // Panel that paints a background image behind the form controls.
        JPanel C = new JPanel() {
            protected void paintComponent(Graphics g) {
                g.drawImage(icon.getImage(), 0, 0, null);
                super.paintComponent(g);
            }
        };

        // Constructor; must match the class name (the original declared "public Search Record ()").
        public Search() {
            icon = new ImageIcon("images/canres.png");
            C.setOpaque(false);
            C.setLayout(new GridLayout(5, 2, 4, 4));
            setTitle("Search Record");
            C.add(l0);
            C.add(edpno);
            edpno.addActionListener(this);
            C.add(l1);
            C.add(fname);
            fname.setForeground(Color.BLUE);
            fname.setFont(new Font(" ", Font.BOLD, 15));
            C.add(l2);
            C.add(lname);
            C.add(l3);
            C.add(contno);
            contno.setForeground(Color.BLUE);
            contno.setFont(new Font(" ", Font.BOLD, 15));
            C.add(bOK);
            bOK.addActionListener(this);
            C.add(bRetrieve);
            bRetrieve.addActionListener(this);
            bOK.setBackground(Color.white);
            bRetrieve.setBackground(Color.white);
            // The original never added the panel or showed the frame; these lines make it runnable.
            getContentPane().add(C);
            setSize(400, 250);
            setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            setVisible(true);
        }

        // Looks up the record for the entered EDP number and fills the form fields.
        // (Named "saverecord" in the original, although it performs a SELECT.)
        public void retrieveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                // The original selected "contact number" with a space, which is not a valid
                // unbracketed column name; "contactnumber" is assumed here.
                ResultSet rs = s.executeQuery(
                        "select firstname, lastname, contactnumber from name WHERE edpno = '"
                        + edpno.getText() + "'");
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                }
                s.close();
                con.close();
            } catch (Exception q) {
                JOptionPane.showMessageDialog(this, q);
            }
        }

        // Writes the form values back to the database for the entered EDP number.
        public void saveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem()
                        + "',Lastname='" + lname.getText()
                        + "',Contactnumber='" + contno.getSelectedItem()
                        + "' WHERE edpno='" + edpno.getText() + "'";
                s.executeUpdate(sql);
                JOptionPane.showMessageDialog(this, "New room record has been successfully saved");
                dispose();
                s.close();
                con.close();
            } catch (Exception mismatch) {
                JOptionPane.showMessageDialog(this, mismatch);
            }
        }

        public void actionPerformed(ActionEvent ako) {
            if (ako.getSource() == edpno || ako.getSource() == bRetrieve) {
                // The original wired bRetrieve to dispose() and never called the
                // retrieval method; calling it here matches the button's intent.
                retrieveRecord();
            } else if (ako.getSource() == bOK) {
                saveRecord();
            }
        }

        public static void main(String[] awtsave) {
            new Search();
        }
    }

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event with many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme, with the title: F# Today/F# Tomorrow. He showed how to use F# scripting to browse through open databases, OData web services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which are the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like a usual strongly typed object model. There is no guessing what a property of an object is called; Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. If we used a free chemical database, for example, it would be: let data = DbProvider(http://.....); data is then the object which contains all the data. It has an elements collection which contains elements with the properties Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed.

    The same can be achieved, of course, with code generators that take as input the schema of the data source (OData web service, database, SharePoint, JSON-serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated "deserialization" assembly could easily become several hundred MB.

    *** The following part contains my guesses about how exactly this works, based on asking Don two questions ***

    Q: Can I use Type Providers from C#?
    D: No.
    Q: F# is, after all, a library. Can I reference the F# assemblies and use the contained Type Providers?
    D: F# annotates the generated types in a special way at runtime, which is not a static type that C# could use.

    The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the URL of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual URL in a config file. OK, so this explains why it works at all. But in the demo we saw full Intellisense support down to the deepest object level. It looks like, as you navigate deeper into the object hierarchy, the Type Provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only if you define "static" as a type whose properties show up in Intellisense. But since this type information is determined while you are typing, and it is not used to generate a true static type, you cannot use these "intellistatic" types from C#.

    Nonetheless, this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds, to quickly get an overview of any structured data storage. My favorite programming language C# will not get such features in the near future, but there is hope: if you restrict yourself to OData sources, you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow, and the output is nicely rendered, which makes it a very good tool to explore OData sources today.
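    For readers outside LINQPad, the same kind of OData query can be written in C# with the WCF Data Services client library. This is a rough sketch only: the Post class and the feed URL below are illustrative assumptions, not the real Stack Exchange OData schema.

        using System;
        using System.Data.Services.Client; // WCF Data Services client
        using System.Data.Services.Common;
        using System.Linq;

        // Illustrative entity shape; the real feed exposes different sets and properties.
        [DataServiceKey("Id")]
        public class Post
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public int Score { get; set; }
        }

        class ODataSketch
        {
            static void Main()
            {
                // Hypothetical service root; substitute the real OData endpoint.
                var ctx = new DataServiceContext(new Uri("https://odata.example.com/stackoverflow"));

                // The LINQ query is translated into an OData URI ($filter, $top).
                var topPosts = ctx.CreateQuery<Post>("Posts")
                                  .Where(p => p.Score > 100)
                                  .Take(10);

                foreach (var p in topPosts)
                    Console.WriteLine("{0} ({1})", p.Title, p.Score);
            }
        }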

    Read the article

  • TechEd North America 2012 – Day 1 #msTechEd

    - by Marco Russo (SQLBI)
    Yesterday Alberto and I delivered the pre-conference day about BISM Tabular in Analysis Services 2012. We received very good feedback and now I am looking forward to meeting people who read our blogs and our books! Ping me on Twitter at @marcorus if you want to contact me during the conference. This is my schedule for the next few days:

    Monday, June 11, 2012
    • 10:30am-12:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    • I will try to watch some sessions in the afternoon
    • 6:30pm-7:00pm: I will be at the O'Reilly booth, meeting book readers and doing some book signing

    Tuesday, June 12, 2012
    • 12:30pm-3:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    • 5:00pm-6:15pm: I will attend Alberto's session DBI413 Many-to-Many Relationships in BISM Tabular (room S330E)
    • 6:15pm-9:00pm: Community Night & Ask the Experts, where we'll discuss Analysis Services, Tabular, and Multidimensional!

    Wednesday, June 13, 2012
    • 11:15am-11:30am: Don't miss this special demo session at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Alberto and I will present Querying multi-billion rows with many-to-many relationships in SSAS Tabular (xVelocity), and you're invited to guess the response time of DAX queries on a 4-billion-row table with many-to-many relationships before we run them! We'll give away some 8GB USB keys if you guess the right answer!
    • 12:30pm-1:00pm: Alberto and I will have a book-signing session at the TechEd Bookstore
    • 3:00pm-5:00pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)

    Thursday, June 14, 2012
    • 2:45pm-4:00pm: I will deliver my DBI319 BISM: Multidimensional vs. Tabular breakthrough session in room S320A. I expect many questions here!

    And if you want to learn more about Analysis Services Tabular, we announced two more online sessions of our SSAS Tabular Workshop:
    • July 2-3, 2012 - SSAS Workshop Online - America's time zone
    • September 3-4, 2012 - SSAS Workshop Online - America's time zone

    Register now if you are interested; the early bird for the July session expires on June 19, 2012! I will also deliver an SSAS Workshop in Oslo (Norway) on August 27-28, 2012.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter.

    Within your BizTalk 2009 project, right-click the project and select Add Generated Items. In the resulting window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next.

    You must now set the connection string. To start with, choose Set, then New DSN (data source name). You now need to define the data source you will be connecting to. On the User DSN tab select Add, and then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user.

    * Although I initially set up a User DSN, and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port.

    You will then be asked to set up the MySQL ODBC data source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration.

    This will then update the User DSN tab with the new data source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC Transport window select Next.

    You will now be presented with the Schema Information window, where you must supply the namespace, type, and root element names for your schema.

    Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure.

    * I later discovered that this option is fine for MySQL stored procedures without input parameters, but fails for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.)

    Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL:

        Select * from TestTable;

    The table itself has three columns: Name, Sex and Married.

    Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:
    • An empty schema for the request; after all, I have no parameters for the stored procedure.
    • A response schema comprised of a Table Record with Name, Sex and Married children.

    Next I will be looking at the use of the ODBC adapter with:
    • Receive ports
    • Send ports

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

    Oracle Customer: Openmatics s.r.o.
    Location: Pilsen, Czech Republic
    Industry: Automotive
    Employees: 70

    Its goal was to develop and operate a flexible, open telematics platform for automotive applications, independent of vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

    With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

    A word from Openmatics s.r.o.

    "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." – Gero Strobel, Head of Development, Openmatics s.r.o.

    Challenges
    • Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser, to better meet automotive market and customer needs
    • Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
    • Establish an open platform where third-party providers—such as original equipment manufacturers (OEMs), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
    • Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
    • Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

    Solutions
    • Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
    • Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
    • Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
    • Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
    • Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications
    • Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
    • Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

    Oracle Products & Services
    • Oracle Application Development Framework
    • Oracle WebCenter Portal
    • Oracle SOA Suite
    • Oracle Enterprise Pack for Eclipse
    • Oracle Database
    • Oracle Consulting

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of the year. I expect Windows Azure to be very hot in 2011, with a lot of cloud computing adoption happening on Windows Azure. Web application scalability will be the big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack.

    Windows Azure
    Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: pay only for what you use, scale up when you need capacity, and pull it back when you don't. Patches and maintenance are handled for you, all in a secure environment with over 99.9% uptime.

    Silverlight 5
    Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop, and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year, with many new capabilities and features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011.

    Windows Phone 7 Development Tools
    Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post, and MSDN documentation is available from here.

    EF Code First
    I am a big fan of Entity Framework's Code First approach and hope that it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three types of approaches, and these will attract different types of developer audiences. (A minimal Code First sketch appears at the end of this post.)

    ASP.NET MVC 3
    ASP.NET MVC 3 will be the hottest technology in the Microsoft web stack next year. ASP.NET developers will widely move to the ASP.NET MVC framework from WebForms development. The new Razor view engine is great, and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML, and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First.

    RavenDB
    NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform, developed by Ayende Rahien. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. I have written a few articles on RavenDB, and you can read them from here.

    Managed Extensibility Framework (MEF)
    Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution for the runtime extensibility problem. I hope that more .NET developers will adopt MEF next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals.
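    As promised above, here is a minimal EF Code First sketch, written against the DbContext/DbSet API style. The class names and properties are invented for illustration, and the connection and database name are left to convention.

        using System.Data.Entity; // EF Code First (DbContext API)

        // Plain POCO -- no base class or attributes required.
        public class Movie
        {
            public int Id { get; set; }        // convention: "Id" becomes the primary key
            public string Title { get; set; }
        }

        // The context derives the database schema from the model classes.
        public class MovieContext : DbContext
        {
            public DbSet<Movie> Movies { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var db = new MovieContext())
                {
                    db.Movies.Add(new Movie { Title = "Inception" });
                    db.SaveChanges(); // creates the database on first use, by convention
                }
            }
        }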

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:

        R> library(ORE)
        R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
        R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors.

    Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting

    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The available settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:

    1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    2. Under Connections, right-click on RDP-Tcp and select Properties.
    3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    4. Click Apply, then OK; log out of the remote session and reconnect.

    After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel, and raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device

    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11, and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.

        R> options(device = "CairoWin")  # use Cairo device for plotting during the session
        R> library(Cairo)                # load the Cairo, grid and png libraries
        R> library(grid)
        R> library(png)
        R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30)))  # create embedded R plot
        R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                        # extract image
        R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)             # generate raster graph
        R> dev.off()                                                             # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

        R> help(Devices)

    Read the article

  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
    Digital World, Digital Locks

    One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably the most important of these assets are passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetrical keys, private keys, etc., etc. Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

    Good-Ol' Post-It Notes Under Your Keyboard

    If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: you should keep your passwords on an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations.

    Now I have to break down that statement for you:
    • Encrypted: what's the point of saving your passwords in a file that any Windows admin with enough privileges can read?
    • Access-controlled: this one is pretty much self-explanatory.
    • Version-controlled: passwords change (and I'm really hoping you do change them), and version control would allow you to track what a previous password was if the utility you've chosen doesn't handle that for you.
    • Backed-up: you want a safe copy of the password list to be kept offline, preferably in long-term storage, with relative ease of restoring.
    • Well-known shared location: this is critical for teams: what good is a password list if only one person on the team knows where it is?

    I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions, like cell encryption, for this; I have found such implementations to be impractical, for the most part.

    Enter The World Of Utilities

    There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, SharePoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before that are worth mentioning include PasswordSafe and 1Password, the latter being a paid solution - but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd-party vendors. The truth is that most of the customers I work with don't need that level of protection for their digital assets, and something like a KeePass database on SharePoint suits them very well.

    What are you doing to safeguard your passwords? Leave a comment below, and join the discussion!

    Cheers,
    -Argenis

    Read the article

  • What Counts for a DBA: Skill

    - by drsql
    "Practice makes perfect," right? Well, not exactly. The reality is that this saying is an untrustworthy aphorism. I discovered this in my "younger" days, when I was a passionate tennis player, practicing and playing 20+ hours a week. No matter what my passion level was, without some serious coaching (and perhaps a change in dietary habits), my skill level was never going to rise to a level where I could make any money at the sport other than selling tennis balls at a sporting goods store. My game may have improved with all that practice, but I had too many bad practices to overcome. Practice by itself merely reinforces what we know and what we can figure out naturally. The truth is actually closer to the expression used by Vince Lombardi: "Perfect practice makes perfect."

    So how do you become skilled as a DBA if practice alone isn't sufficient? Hit the Internet and start searching for SQL training and you can find 100 different sites. There are also hundreds of blogs, magazines, books, and conferences, both onsite and virtual. But then how do you know who is good? Unfortunately, the worst guide can often be the experience level of the writer. Some of the best DBAs are frighteningly young, and some got their start back when databases were stored on stacks of paper with little holes in them.

    As a programmer, is it really so hard to understand normalization? Set-based theory? Query optimization? Indexing and performance tuning? The biggest barrier often is previous knowledge, particularly programming skills cultivated before you get started with SQL. In the world of technology, it is pretty rare that a fresh programmer will gravitate to database programming. Database programming is very unsexy work, because without a UI all you have are a bunch of text strings that you could never impress anyone with. Newbies spend most of their time building UIs or apps with procedural code in C# or VB, scoring obvious, interesting wins. Making matters worse, SQL programming requires mastery of a much different toolset than most any mainstream programming skill. Instead of controlling everything yourself, most of the really difficult work is done by the internals of the engine (written by other, non-relational programmers... we just can't get away from them).

    So is there a golden road to achieving a high skill level? Sadly, with tennis, I am pretty sure I'll never discover it. With programming, however, it seems to boil down to practicing the appropriate techniques for whatever type of programming you are doing. Can a C# programmer build a great database? As long as they don't treat SQL like C#, absolutely. The same goes for a DBA writing C# code. None of this stuff is rocket science, as long as you understand that different types of programming require different skill sets, and you as a programmer must recognize the difference between one of the procedural languages and SQL and treat them differently. Skill comes from practicing doing things the right way and making "right" a habit.

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS

    2014 Oracle Excellence Award: Sustainability Innovation

    Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle's Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. Your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation.

    Submit a nomination form located here by Friday, June 20 if your company is using any Oracle product to take an environmental lead as well as to reduce costs and improve business efficiencies by using green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28-October 2) in San Francisco.

    About the Award
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had as well as the business efficiencies they have achieved through their combined use of Oracle products.

    Nomination Eligibility
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday, June 20, 2014.

    Benefits to Award Winners
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website and/or press release
    • Possible placement in Oracle Profit Magazine and/or Oracle Magazine
    • 'Enable the Eco-Enterprise' podcast opportunity

    See last year's winners here.

    Questions?
    Send an email to: [email protected]
    Follow Oracle's Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog.
    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology for managing database statistics for the P6 database.

    1. No automatic re-gathering - A daily, weekly, or other fixed interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart re-gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
    • New releases of the application
    • Changes in the underlying hardware or software versions (e.g., a new Oracle RDBMS version)
    • When additional user groups are added; the new groups may use the software in significantly different ways
    • After significant changes in the data; this may be monthly, quarterly, or yearly

    3. Always test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a way out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code--both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for the P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • ZFS Storage Appliance: LDAP configuration

    - by user13138569
    This post walks through connecting a ZFS Storage Appliance to an OpenLDAP directory. First, set up OpenLDAP on Solaris 11: prepare a slapd.conf and an ldif file defining a test account, user01.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral ldap://root.openldap.org

        pidfile /var/openldap/run/slapd.pid
        argsfile /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath /usr/lib/openldap
        moduleload back_bdb.la
        # moduleload back_hdb.la
        # moduleload back_ldap.la

        # Sample security restrictions
        # Require integrity protection (prevent hijacking)
        # Require 112-bit (3DES or better) encryption for updates
        # Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        # Root DSE: allow anyone to read it
        # Subschema (sub)entry DSE: allow anyone to read it
        # Other DSEs:
        # Allow self write access
        # Allow authenticated users read access
        # Allow anonymous users to authenticate
        # Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        # by self write
        # by users read
        # by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn. (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################
        database bdb
        suffix "dc=oracle,dc=com"
        rootdn "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory /var/openldap/openldap-data
        # Indices to maintain
        index objectClass eq

    The ldif file loaded into the directory:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    With the LDAP server running, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, set the Base search DN and the address of the LDAP server. Then verify that the appliance can look up user01; if the lookup in the user field reports "Unknown or invalid user", re-check the LDAP service settings. To verify the directory itself from a Solaris 11 client, enable the LDAP client services and confirm the account resolves with getent:

        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    Finally, mount an NFS share from the appliance and confirm that user01 resolves and can create files:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The appliance is now resolving users through LDAP!

    Read the article

  • Iterate over rows/checkboxes in a RadGrid

    - by ChessWhiz
    Hi, I have a Telerik RadGrid with a GridTemplateColumn that contains a checkbox, as follows:

        <telerik:GridTemplateColumn HeaderText="MINE" UniqueName="MyTemplateColumn">
            <ItemTemplate>
                <asp:CheckBox id="MyCheckBox" runat="server"></asp:CheckBox>
            </ItemTemplate>
        </telerik:GridTemplateColumn>

    I want to set the box to be "checked" based on a value read from the database. I could handle the ItemDataBound event and read the database when each row is bound, but that results in n lookups. Instead, I want to handle DataBound, and then set all the values at once. So, in that method, I want code like this:

        // read all values from database first, then...
        foreach (var chkbox in MyRadGrid.MasterTableView.Columns.FindByUniqueName("MyTemplateColumn").FindControl("MyCheckBox"))
        {
            chkbox.Checked = oneValue;
        }

    That doesn't work, because FindControl isn't a method of GridColumn, and it won't generate an iterable list of the checkboxes. What is the correct way to iterate through the checkboxes in the template column? Thanks!
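    One common pattern, sketched here from the standard Telerik API (the value lookup is a hypothetical placeholder for whatever was read from the database), is to iterate the grid's data items and call FindControl on each row rather than on the column:

        // In the DataBound handler, after loading all values with a single query:
        foreach (GridDataItem item in MyRadGrid.MasterTableView.Items)
        {
            var chkbox = (CheckBox)item.FindControl("MyCheckBox");
            chkbox.Checked = GetValueForRow(item); // hypothetical lookup into the pre-loaded values
        }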

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses TransactionScope to produce a distributed transaction. Everything works correctly.

    We also have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception occurs in every web service request that uses a transaction scope.

    Server #2 and server #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers.

    Full stack trace:

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at NHibernate.Connection.DriverConnectionProvider.GetConnection()
        at NHibernate.Impl.SessionFactoryImpl.OpenConnection()

    I searched for this message but didn't find any appropriate solution. So what settings should I check, and what exactly should I do to fix it?

    Read the article

  • InvalidOperationException when calling SaveChanges in .NET Entity framework

    - by Pär Björklund
    Hi, I'm trying to learn how to use the Entity Framework, but I've hit an issue I can't solve. What I'm doing is walking through a list of movies that I have and inserting each one into a simple database. This is the code I'm using:

        private void AddMovies(DirectoryInfo dir)
        {
            MovieEntities db = new MovieEntities();
            foreach (DirectoryInfo d in dir.GetDirectories())
            {
                Movie m = new Movie { Name = d.Name, Path = dir.FullName };
                db.AddToMovies(m); // the original snippet passed "movie" here, which doesn't compile
            }
            db.SaveChanges();
        }

    When I do this I get an exception at db.SaveChanges() that reads:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    I haven't been able to find out what's causing this issue. My database table contains three columns:

        Id int autoincrement
        Name nchar(255)
        Path nchar(255)

    Update: I checked my edmx file, and the SSDL section has StoreGeneratedPattern="Identity" as suggested. I also followed the blog post and tried to add ClientAutoGenerated="true" and StoreGenerated="true" in the CSDL as suggested there. This resulted in compile errors (Error 5: The 'ClientAutoGenerated' attribute is not allowed.). Since the blog post is from 2006 and it has a link to a follow-up post, I assume it's been changed. However, I cannot read the follow-up post since it seems to require an MSDN account.

    Read the article

  • Mixed surrogate composite key insert in JPA 2.0, PostgreSQL and Hibernate 3.5

    - by Gerald
    First off, we are using JPA 2.0 with Hibernate 3.5 as the persistence provider on a PostgreSQL database. We successfully use the database's sequence via the JPA 2.0 annotations as an auto-generated value for single-field surrogate keys, and all works fine. Now we are implementing a bi-temporal database scheme that requires a mixed key in the following manner:

        Table 1:
        id (pk, integer, auto-generated sequence)
        validTimeBegin (pk, dateTime)
        validTimeEnd (dateTime)
        firstName (varChar)

    Now we have a problem. If we INSERT a new element, the id field is auto-generated and that's fine. But if we want to UPDATE a row within this scheme, we have to change the validTimeBegin column WITHOUT changing the id field and insert it as a new row, like so:

    BEFORE THE UPDATE OF THE ROW:

        |---|-------------------------|-------------------------|-------------------|
        | id| validTimeBegin          | validTimeEnd            | firstName         |
        |---|-------------------------|-------------------------|-------------------|
        |  1| 2010-05-01-10:00:00.000 | NULL                    | Gerald            |
        |---|-------------------------|-------------------------|-------------------|

    AFTER THE UPDATE OF THE ROW happening at exactly 2010-05-01-10:35:01.788 server time (we update the person with id 1 to reflect his new first name):

        |---|-------------------------|-------------------------|-------------------|
        | id| validTimeBegin          | validTimeEnd            | firstName         |
        |---|-------------------------|-------------------------|-------------------|
        |  1| 2010-05-01-10:00:00.000 | 2010-05-01-10:35:01.788 | Gerald            |
        |---|-------------------------|-------------------------|-------------------|
        |  1| 2010-05-01-10:35:01.788 | NULL                    | Jerry             |
        |---|-------------------------|-------------------------|-------------------|

    So our problem is that this doesn't work at all using an auto-generated sequence for the id field, because when inserting a new row the id is ALWAYS auto-generated, although it really is part of a composite key which should sometimes behave differently. So my question is: is there a way to tell Hibernate via JPA to stop auto-generating the id field in the case where I want to generate a new variety of the same person, and go on as usual in every other case - or do I have to take over the whole id generation with custom code?

    Thanks in advance, Gerald

    Read the article

  • NHibernate SubclassMap gives DuplicateMappingException

    - by stiank81
    I'm using NHibernate to handle my database - with Fluent configuration. I'm not using automappings; all mappings are written explicitly, and everything is working just fine. Now I wanted to add my first mapping to a subclass, using SubclassMap, and I ran into problems. With the simplest possible setup, an NHibernate DuplicateMappingException is thrown, saying that the subclass is mapped more than once:

        NHibernate.MappingException : Could not compile the mapping document: (XmlDocument)
        ---- NHibernate.DuplicateMappingException : Duplicate class/entity mapping MyNamespace.SubPerson

    I get this with my simple classes written for testing:

        public class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SubPerson : Person
        {
            public string Foo { get; set; }
        }

    With the following mappings:

        public class PersonMapping : ClassMap<Person>
        {
            public PersonMapping()
            {
                Not.LazyLoad();
                Id(c => c.Id);
                Map(c => c.Name);
            }
        }

        public class SubPersonMapping : SubclassMap<SubPerson>
        {
            public SubPersonMapping()
            {
                Not.LazyLoad();
                Map(m => m.Foo);
            }
        }

    Any idea why this is happening? If automappings were involved, I would guess it was caused by the automapper adding a mapping too, but there should be no automapping. I create my database session specifying fluent mappings:

        private static ISession CreateSession()
        {
            var cfg = Fluently.Configure()
                .Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("unit_test.db"))
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeClassInTheAssemblyContainingAllMappings>());
            var sessionSource = new SessionSource(cfg.BuildConfiguration().Properties, new TestModel());
            var session = sessionSource.CreateSession();
            sessionSource.BuildSchema(session); // the original called "_sessionSource", a field not declared here
            return session;
        }

    Again, note that this only happens with SubclassMap; ClassMaps are working just fine!

    Read the article

  • Windows 7 - Enable Network DTC Access

    - by Russ Clark
    I have a Visual Studio 2010 Windows Forms application in which I start a transaction using the TransactionScope class. I then receive a message from a SQL Server Service Broker message queue, which works fine. Next I try to call a stored procedure in the same database through my data access layer, which is a Visual Studio dataset (xsd file). When I make this second call to the database I get the following error message:

        The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02B).

    I've seen several posts on the web that talk about enabling DTC access through dcomcnfg.exe and allowing DTC to communicate through Windows Firewall. I've done those things and am still having this problem. I know our remote database server is set up to enable DTC access, because we are using similar transactions in other projects built with Visual Studio 2008 on Windows XP and Vista. I think there is something specific about Windows 7 and Visual Studio 2010 causing this problem, but I haven't been able to find out what it is. Can anyone help with this problem?
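    For reference, a stripped-down sketch of the flow described above; the receive and stored-procedure methods are hypothetical placeholders standing in for the Service Broker RECEIVE and the typed-dataset call:

        using System;
        using System.Transactions;

        class DtcFlowSketch
        {
            // Placeholder for the Service Broker RECEIVE against the database.
            static string ReceiveMessage() { return "message"; }

            // Placeholder for the stored procedure call through the typed dataset.
            static void CallStoredProcedure(string message) { }

            static void Main()
            {
                // Both calls enlist in the same ambient transaction; the second
                // connection promotes it to a distributed (MSDTC) transaction,
                // which is the point where the error above surfaces.
                using (var scope = new TransactionScope())
                {
                    string message = ReceiveMessage();
                    CallStoredProcedure(message);
                    scope.Complete(); // commit only if both steps succeeded
                }
            }
        }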

    Read the article

  • Pass dataset to subreport with SQL Server Reporting Services

    - by Juliet
    I'm using SQL Server Reporting Services and the report designer that comes with Visual Studio. I've got a really big report - actually so large that Visual Studio hangs (sometimes for hours at a time) or just crashes when I make changes. There is precious little I can do to solve that, so I've decided to just move the bottom half of the report into a subreport. So, I started with one enormous, unresponsive report and ended with two small, manageable reports - surprisingly, this actually works. One problem: my subreport uses the same data as my main report. Right now, it populates its dataset by re-querying the database. The extra round trip to the database causes the report to take twice as long to generate, up from 45 minutes to 1 1/2 hours. I'd like to avoid hitting the database again and instead use the same dataset in both reports. How can I share or pass a dataset between a report and a subreport?

    Read the article

  • EF 4, POCO and AddOrUpdate

    - by Eric J.
    I'm trying to create a method on an EF 4 POCO repository called AddOrUpdate. The idea is that the business layer can pass in a POCO object and the persistence framework will add the object if it is new (not yet in the database), or else update the database (once SaveChanges() is called) with the new values. This is similar to some other questions I have asked about EF, but I'm only about 80% of the way to understanding this, so please forgive partial duplication.

    The part I'm missing is how to update the object graph in my ObjectContext/associated ObjectSet for the passed-in business object once I have determined that the business object indeed already exists in the database (and has now been loaded thanks to TryGetObjectByKey). ApplyCurrentValues sounds sort of like what I want, but it only copies scalar values and doesn't seem intended to update the object graph in the ObjectContext/ObjectSet. Because of my particular use case, I don't care about concurrency right now.

        public void AddOrUpdate(BO biz)
        {
            object obj;
            EntityKey ek = Ctx.CreateEntityKey(mySetName, biz);
            bool found = Ctx.TryGetObjectByKey(ek, out obj);
            if (found)
            {
                // How do I do what this method name implies? Biz is a parent with children.
                mySet.TellTheSetToUpdateThisObject(biz);
            }
            else
            {
                mySet.AddObject(biz);
            }
            Ctx.DetectChanges();
        }
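    For the scalar half of the problem, EF 4's ApplyCurrentValues can stand in for the placeholder call above. This is a hedged sketch only: it copies scalar properties onto the attached entity with the same key, but, as noted above, it does not walk the children of the graph:

        if (found)
        {
            // Copies scalar property values from 'biz' onto the attached entity;
            // child objects in the graph are NOT updated by this call.
            Ctx.ApplyCurrentValues(mySetName, biz);
        }
        else
        {
            mySet.AddObject(biz);
        }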

    Read the article

  • WPF RichTextBox XamlWriter behaviour

    - by Krishna
    I am trying to save some C# source code into the database. Basically I have a RichTextBox in which users can type their code and save it to the database. When they copy and paste from the Visual Studio editor, I would like to preserve the formatting, so I have chosen to save the FlowDocument's XAML to the database and later set it back on RichTextBox.Document. My two functions below serialise and deserialise the RTB's contents.

    private string GetXaml(FlowDocument document)
    {
        if (document == null)
            return String.Empty;

        StringBuilder sb = new StringBuilder();
        XmlWriter xw = XmlWriter.Create(sb);
        XamlDesignerSerializationManager sm = new XamlDesignerSerializationManager(xw);
        sm.XamlWriterMode = XamlWriterMode.Expression;
        XamlWriter.Save(document, sm);
        return sb.ToString();
    }

    private FlowDocument GetFlowDocument(string xamlText)
    {
        var flowDocument = new FlowDocument();
        if (xamlText != null)
            flowDocument = (FlowDocument)XamlReader.Parse(xamlText);
        return flowDocument;
    }

    However, when I serialise and deserialise the following code, I notice this incorrect(?) behaviour. The input

    using System; public class TestCSScript : MarshalByRefObject { }

    comes back in the serialised XAML as

    using System; public class TestCSScript : MarshalByRefObject {}{ }

    Notice the new set of "{}". What am I doing wrong here? Thanks in advance for the help!
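
    For what it's worth, that extra "{}" looks like the XAML escape sequence for a literal opening brace rather than corruption: XamlWriter prefixes text beginning with "{" so that XamlReader.Parse will not mistake it for a markup extension, and if that is what is happening here the string should round-trip back to the original. If the goal is simply a lossless save/load of the document, a TextRange-based version is a common alternative; a sketch, assuming UTF-8 storage (namespaces: System.IO, System.Text, System.Windows, System.Windows.Documents).

    private static string GetXaml(FlowDocument document)
    {
        var range = new TextRange(document.ContentStart, document.ContentEnd);
        using (var stream = new MemoryStream())
        {
            range.Save(stream, DataFormats.Xaml);   // handles brace escaping internally
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }

    private static FlowDocument GetFlowDocument(string xamlText)
    {
        var document = new FlowDocument();
        var range = new TextRange(document.ContentStart, document.ContentEnd);
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(xamlText ?? "")))
        {
            range.Load(stream, DataFormats.Xaml);
        }
        return document;
    }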

    Read the article

  • AndroMDA maven code generation and JPA Annotations

    - by ArsenioM
    I am using the AndroMDA plugin for Maven to generate code from a UML diagram made in MagicDraw. When the code is generated, AndroMDA produces the JPA annotations for the persistence layer. I believe that during the generation process AndroMDA uses naming strategies to determine the table and column names for the database. I want to work out how AndroMDA derives these JPA annotations, because I need to display the database names that correspond to the UML entity and attribute names. I was wondering whether there is an AndroMDA API I could use for this by handing it the UML diagram, or at least where the naming strategies AndroMDA uses are specified. During generation, AndroMDA writes the JPA annotations for the entities, attributes, and so on in my Java classes according to a set of rules that live in AndroMDA's EJB3 cartridge (the database is then created from those JPA annotations). I want to write a program that, given the .xml file of a project's UML diagram, returns the same table and attribute names that appear in the JPA annotations. I was hoping I could take advantage of the EJB3 cartridge for this, either through an AndroMDA API (if one exists) or by reimplementing the same rules the cartridge uses. To be more illustrative, for example: if my UML model has an entity called "CompanyGroup", AndroMDA generates the following class definition:

    @javax.persistence.Entity
    @javax.persistence.Table(name = "COMPANY_GR")
    public class CompanyGroup implements java.io.Serializable, Comparable<CompanyGroup>

    This is just an example (not a real case), but the way AndroMDA translates "CompanyGroup" into "COMPANY_GR" has to be specified somewhere. Hope this explanation is useful enough. Thanks.
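
    The exact rules live in the EJB3 cartridge's templates, and AndroMDA's cartridges expose a maxSqlNameLength namespace property (at least the Hibernate cartridge does, and as far as I can tell the EJB3 cartridge as well) that truncates generated SQL identifiers. With a limit of 10, a plain camel-case-to-underscore conversion followed by truncation would in fact reproduce COMPANY_GR from CompanyGroup, which suggests a length limit is in play in the example above. A sketch of that conjectured rule, written in C# for consistency with the rest of this page; this is an illustration of the convention, not AndroMDA's actual source.

    // conjecture: UPPER_SNAKE conversion followed by a hard cut at maxLength;
    // ToSqlName("CompanyGroup", 10) == "COMPANY_GR"
    static string ToSqlName(string umlName, int maxLength)
    {
        var sb = new System.Text.StringBuilder();
        foreach (char c in umlName)
        {
            if (char.IsUpper(c) && sb.Length > 0)
                sb.Append('_');                      // word boundary
            sb.Append(char.ToUpperInvariant(c));
        }
        string name = sb.ToString();
        return name.Length <= maxLength ? name : name.Substring(0, maxLength);
    }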

    Read the article

  • DataGridView row is still dirty after committing changes

    - by Ecyrb
    DataGridView.IsCurrentRowDirty remains true after I commit changes to the database. I want it to be false so it doesn't trigger RowValidating when the row loses focus. I have a DataGridView bound to a BindingList<T>. I handle the CellEndEdit event and save changes to the database. After saving those changes I would like DataGridView.IsCurrentRowDirty to be false, since all cells in the row are up to date; however, it stays true. This causes problems for me because when the row does lose focus it triggers RowValidating, which I handle and in which I validate all three cells. So even though all the cells are valid and none are dirty, it will still validate them all. That's a waste. Here's an example of what I have:

    void dataGridView_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
    {
        // Ignore the cell if it's not dirty
        if (!dataGridView.IsCurrentCellDirty)
            return;
        // Validate the current cell.
    }

    void dataGridView_RowValidating(object sender, DataGridViewCellCancelEventArgs e)
    {
        // Ignore the row if it's not dirty
        if (!dataGridView.IsCurrentRowDirty)
            return;
        // Validate all cells in the current row.
    }

    void dataGridView_CellEndEdit(object sender, DataGridViewCellEventArgs e)
    {
        // Validate all cells in the current row and return if any are invalid.
        // If they are valid, save the changes to the database.
        // This is when I would expect dataGridView.IsCurrentRowDirty to be false.
        // When this row loses focus it will trigger RowValidating and validate all
        // cells in this row, which we already did above.
    }

    I've read posts that say I could call the form's Validate() method, but that will cause RowValidating to fire, which is what I'm trying to avoid. Any idea how I can get DataGridView.IsCurrentRowDirty to be false? Or maybe a way to prevent RowValidating from unnecessarily validating all the cells?
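
    IsCurrentRowDirty is read-only, so it cannot be reset directly; a common workaround is to track the saved state yourself and have RowValidating bail out early. A small sketch of that approach; ValidateRow and SaveRow are hypothetical helpers standing in for the validation and save logic described above.

    private bool rowAlreadySaved;

    void dataGridView_CellEndEdit(object sender, DataGridViewCellEventArgs e)
    {
        // validate and persist the row, then remember that it is clean
        if (ValidateRow(e.RowIndex) && SaveRow(e.RowIndex))
            rowAlreadySaved = true;
    }

    void dataGridView_RowValidating(object sender, DataGridViewCellCancelEventArgs e)
    {
        if (rowAlreadySaved)
        {
            rowAlreadySaved = false;   // reset for the next edit
            return;                    // skip the redundant full-row validation
        }
        // full row validation only for rows that were never saved
    }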

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater. I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points: the same user identity is running the process in both cases; the same working directory is used in both cases; the same dbdeploy code is running in both cases and with the same supplied parameters; the same database connection string is being used in both cases; and the same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089). Here's the top of the stack trace during the error:

    at System.Data.OracleClient.OCI.DetermineClientVersion()
    at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
    at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
    at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
    at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
    at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
    at System.Data.OracleClient.OracleConnection.Open()
    at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager.GetCurrentVersionFromDb()

    My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?
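
    One way to answer the "what's different" question directly is to run a tiny probe from both environments and diff the output; in practice the usual suspects are PATH or ORACLE_HOME differences and a bitness mismatch (a 32-bit Oracle client cannot be loaded into a 64-bit build-agent process, or vice versa). A sketch of such a probe; nothing here is dbdeploy- or TeamCity-specific.

    using System;

    class OracleEnvProbe
    {
        static void Main()
        {
            // run once from your shell and once from the TeamCity/NAnt build,
            // then compare the two outputs line by line
            Console.WriteLine("PATH        = " + Environment.GetEnvironmentVariable("PATH"));
            Console.WriteLine("ORACLE_HOME = " + Environment.GetEnvironmentVariable("ORACLE_HOME"));
            Console.WriteLine("64-bit      = " + (IntPtr.Size == 8));
            Console.WriteLine("User        = " + Environment.UserName);
            Console.WriteLine("Directory   = " + Environment.CurrentDirectory);
        }
    }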

    Read the article

  • Orca: extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I looked up Orca (version 3.1), whose documentation states: Many merge module options can be specified from the command line... Extracting Files from a Merge Module: Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree, and extract the files into a source image once it has been merged into a target database... Extracting Files: To extract the individual files from a merge module, use the ... -x <path> ... option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module. It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines:

    orca -x theDirectory theModule.msm
    orca theModule.msm -x theDirectory

    ...and others. I get nowhere. The closest I've gotten was this:

    orca -q -x theDirectory -m theModule.msm

    ...but then it complains that I didn't specify a database to merge into. But I'm not trying to merge anything, let alone into a database. I just want the files extracted. Can someone explain what I'm doing wrong with the command-line options?

    Read the article

< Previous Page | 516 517 518 519 520 521 522 523 524 525 526 527  | Next Page >