Search Results

Search found 8523 results on 341 pages for 'bobby tables'.


  • Habanero

    - by csharp-source.net
    An Enterprise Application Framework for .NET that is ideally suited to developing applications in an agile manner. The framework is used to produce an application from the data layer through to the front end. Free and open source under the LGPL license, it includes an ORM, code generation and runtime UI generation to create one application for both desktop and web. Features: * ORM: map database tables to objects in code * Persist property values to and from the database * Define all mapping in a single XML file * Switch between database vendors with one setting * Support for MySQL, MS SQL Server, MS Access, Oracle, PostgreSQL, SQLite and Firebird * FireStarter GUI for managing class-definition XML * Generate user interfaces and map properties to controls * Develop for both desktop (with Windows Forms) and web (with Gizmox's Visual WebGUI) * Generate new projects and code files * Generate UI forms from templates * Reverse engineer class definitions from existing databases * Support for variable data sources, including an in-memory database. Ships with FireStarter, a free database reverse engineering, domain modelling and code generation tool.

    Read the article

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    An independent research note by Gartner acknowledges that the Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including: OLTP and DSS workload support; workload consolidation; increasing performance and scalability demands; and data compression improvements. Gartner reports that clients using Oracle Exadata experienced the following: significant performance improvements; substantial amounts of cache memory, which greatly improves processing speed; and Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations. Tables compressed with Oracle Advanced Compression automatically recompress as data is added or updated. One client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform. Read the full Gartner note

    Read the article

  • I want a trivial example of where MongoDB can scale but a relational database will have trouble

    - by Ryan Weir
    I'm just learning to use MongoDB, and when discussing with other programmers I would like a quick example of why NoSQL can be a good choice compared to a traditional RDBMS - however, the scenarios I come up with and can find online seem pretty contrived. E.g. a blog with lots of traffic could be represented relationally, but will require some performance tuning and joins across tables (assuming full denormalization isn't being used). Whereas MongoDB would allow direct retrieval from one collection to the same effect. But the response I'm getting from other programmers is "why not just keep it relational and then add some trivial caching later?" Does anybody have a less contrived example where MongoDB will really shine and a relational db will fall over much quicker? The smaller the project/system the better, because it leaves less room for disagreement. Something along the lines of the complexity of the blog example would be really useful. Thanks.
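
    For concreteness, here is the asker's blog example as a single MongoDB document, using the official C# driver (MongoDB.Driver); the database, collection and field names are illustrative, not from the question:

        // A blog post stored as one document: the post, its tags and its
        // comments live together, so one read serves the whole page.
        using MongoDB.Bson;
        using MongoDB.Driver;

        class BlogSketch
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var posts = client.GetDatabase("blog").GetCollection<BsonDocument>("posts");

                posts.InsertOne(new BsonDocument
                {
                    { "title", "Hello" },
                    { "tags", new BsonArray { "intro", "meta" } },
                    { "comments", new BsonArray {
                        new BsonDocument { { "author", "ann" }, { "text", "First!" } } } }
                });

                // One round trip and no joins; the normalized relational version
                // needs posts JOIN post_tags JOIN tags plus a comments query.
                var page = posts.Find(new BsonDocument("title", "Hello")).FirstOrDefault();
            }
        }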

    Read the article

  • At what size of data does it become beneficial to move from SQL to NoSQL?

    - by wobbily_col
    As a relational database programmer (most of the time), I read articles about how relational databases don't scale and NoSQL solutions such as MongoDB do. As most of the databases I have developed so far have been small to mid scale, I have never had a problem that hasn't been solved by some indexing, query optimization or schema redesign. What sort of size would I expect to see MySQL struggling with? How many rows? (I know this is going to depend on the application and the type of data stored. The one that got me thinking was basically a genetics database, so it would have one main table with 3 or 4 lookup tables. The main table will contain, amongst other things, a chromosome reference and a position coordinate. It will likely get queried for a number of entries between two positions on a chromosome, to see what is stored there.)
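
    For scale context, the range query described here is the kind MySQL handles well even at very large row counts, provided there is a composite index on (chromosome, position); a hedged C# sketch with assumed table and column names:

        // Range scan between two positions on one chromosome. With an index on
        // (chromosome, position) this stays cheap at hundreds of millions of rows.
        // Table and column names are assumptions based on the question.
        using MySql.Data.MySqlClient;

        class RangeQuery
        {
            static void Main()
            {
                using (var conn = new MySqlConnection(
                    "server=localhost;database=genetics;uid=app;pwd=secret"))
                using (var cmd = new MySqlCommand(
                    @"SELECT * FROM variants
                      WHERE chromosome = @chr AND position BETWEEN @lo AND @hi", conn))
                {
                    cmd.Parameters.AddWithValue("@chr", "chr7");
                    cmd.Parameters.AddWithValue("@lo", 55000000);
                    cmd.Parameters.AddWithValue("@hi", 55200000);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read()) { /* process a row */ }
                }
            }
        }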

    Read the article

  • ASP.NET C# - do you need a separate datasource for each gridview? [closed]

    - by Brian McCarthy
    Do you need a separate datasource for each gridview if each gridview is accessing the same database but different tables in that database? I'm getting an error on AppSettings that says "non-invocable member". What is the problem with it? Here's the C# code-behind:

    protected void Search_Zip_Plan_Age_Button_Click(object sender, EventArgs e)
    {
        var _with1 = this.ZipPlan_SqlDataSource;
        _with1.SelectParameters.Clear();
        _with1.ConnectionString = System.Configuration.ConfigurationManager.AppSettings("PriceFinderConnectionString").ToString;
        _with1.SelectCommand = "ssp_get_zipcode_plan";
        _with1.SelectParameters.Add("ZipCode", this.ZipCode.Text);
        _with1.SelectParameters.Add("PlanCode", this.PlanCode.Text);
        _with1.SelectParameters.Add("Age", this.Age.Text);
        _with1.SelectCommandType = SqlDataSourceCommandType.StoredProcedure;
        _with1.CancelSelectOnNullParameter = false;
        Search_Results_GridView.DataBind();
    }

    thanks!
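
    The error happens because ConfigurationManager.AppSettings is a property (a NameValueCollection), not a method, so calling it with parentheses produces the compiler's "non-invocable member" complaint; ".ToString" without parentheses would not compile either. A sketch of the fix, keeping the question's key name (and on the broader question: a single database can back many data sources; each SqlDataSource simply encapsulates one query):

        using System.Configuration; // reference the System.Configuration assembly

        class Fix
        {
            static string GetConnectionString()
            {
                // Index the collection with brackets; the indexer already
                // returns a string, so no ToString call is needed.
                return ConfigurationManager.AppSettings["PriceFinderConnectionString"];

                // If the value actually lives under <connectionStrings> rather
                // than <appSettings>, use this instead:
                // return ConfigurationManager
                //     .ConnectionStrings["PriceFinderConnectionString"].ConnectionString;
            }
        }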

    Read the article

  • EPM 11.1.2 - Issues during configuration when using Oracle DB if not using UTF8

    - by Ahmed A
    If you see issues during configuration when using an Oracle DB that is not set to UTF8, use this workaround: a. During configuration of EPM products, a warning message is displayed if the Oracle DB is not UTF8-enabled. If you continue with the configuration, certain products will not work, as they will not be able to read the contents of the tables because the format is wrong. b. The Oracle DB must be set up to use AL32UTF8 or a superset that contains AL32UTF8. c. The main difference between the AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8), whereas Oracle's "UTF8" stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (six bytes per character). Besides this storage difference, AL32UTF8 also has better support for supplementary characters.
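
    To see the storage difference concretely, a minimal C# sketch (the character and byte counts are standard Unicode behavior, not specific to EPM or Oracle):

        using System;
        using System.Text;

        class Utf8Check
        {
            static void Main()
            {
                // U+1F600 is one code point beyond U+FFFF, held as a
                // surrogate pair in .NET's UTF-16 strings.
                string s = char.ConvertFromUtf32(0x1F600);
                Console.WriteLine(s.Length);                      // 2 (surrogate pair)
                Console.WriteLine(Encoding.UTF8.GetByteCount(s)); // 4 bytes, as AL32UTF8 stores it
                // Oracle's legacy "UTF8" charset instead encodes each of the two
                // surrogates separately at 3 bytes apiece: 6 bytes in total.
            }
        }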

    Read the article

  • Mapping Help in the EDM Designer

    The mapping details window that displays the mappings between an entity and database table(s) is pretty straightforward. When you join two related tables in a Table Per Hierarchy inheritance, things can get a little confusing when it comes to the mappings for inherited properties. But did you know that the Mapping Details window uses the Properties window to help? Here are two entities in a TPH hierarchy. Customer inherits Contact. Customer maps to a Customers table which uses ContactID as...

    Read the article

  • Keeping up to date with PeopleSoft Global Payroll Australia legislation

    - by Carolyn Cozart
    The Temporary Flood and Cyclone Reconstruction levy (flood levy) will now apply to individuals for the 2011-2012 year. The Tax Laws Amendment Bill 2011 was tabled in parliament in February 2011 and received royal assent in April 2011. The tax tables, however, were released last week, in May 2011. To find out the details of what is changing in Global Payroll Australia, as well as targeted delivery dates, please visit the Knowledge Center on Support.Oracle.com. Click on the Knowledge tab and simply type in the keywords 'Global Payroll Australia Position'. If further amendments are made, we will revise the document accordingly. Let the Oracle/PeopleSoft team help reduce the stress and anxiety of these changing times by staying informed. PeopleSoft is working hard to get you the information you need. The information is just a few clicks away.

    Read the article

  • Can I import an existing member data used in old ASP to a new ASP.NET membership database? [closed]

    - by Rick Brown
    I have an old website that I designed and still maintain using old ASP that has a membership database (MS-SQL) that I built from scratch. It is a very simple database that has all the user information in one table (including login info and personal info) and then details and other odds and ends in other tables. It is WAY past time to upgrade this to .NET, especially since I need to add a Paypal payment system into it as soon as I can. I've designed several other sites with membership in .NET, but they have all been from scratch. Is there an easy way to transition from the old ASP site to a new .NET membership database without losing the data? There are hundreds of users with thousands of records relating to those users that I'd rather not lose, if possible. Any ideas on a relatively painless way to do this?
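
    There is no built-in migration path, but a one-time import along these lines is common. A hedged sketch, assuming a legacy OldUsers table with Username and Email columns (hypothetical names) and a standard ASP.NET membership provider already configured; hashed legacy passwords generally cannot be carried across providers, hence the generated temporary password:

        using System.Data.SqlClient;
        using System.Web.Security;

        class MembershipImport
        {
            static void Run(string legacyConnectionString)
            {
                using (var conn = new SqlConnection(legacyConnectionString))
                using (var cmd = new SqlCommand("SELECT Username, Email FROM OldUsers", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            MembershipCreateStatus status;
                            Membership.CreateUser(
                                reader.GetString(0),                  // username
                                Membership.GeneratePassword(12, 2),   // temp password; user must reset
                                reader.GetString(1),                  // email
                                null, null, true, out status);
                            // TODO: copy the remaining profile columns into your
                            // own profile table, keyed on the new ProviderUserKey.
                        }
                    }
                }
            }
        }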

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint there should always be one global naming definition for an implementation! Let me discuss some related questions: What is the best convention for the customization reference? How to name database objects (tables, packages etc.)? How to name functional objects like Value Sets, Concurrent Programs, etc.? How to best separate customizations from standard objects?

    What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU = customer abbreviation, CONV = conversion, object #22) or XXFA_DEPRN_RPT_02 (FA = Fixed Assets, DEPRN = short object group, here depreciation, RPT = report, 02 = 2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX = customization, FA = main EBS module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT = short name to specify the customization, 02 = a unique number. Important here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd and 4th characters can be used for the EBS module short name.

    How to name database objects (tables, packages etc.)? I have led different developer teams, and I know that one common way is to take the customization reference and append more characters to classify the object (like _V for a view, _T1 for a trigger etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will by choice reuse this nice view, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reuse by other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.)

    How to name Value Sets, Concurrent Programs, etc.? For Value Sets I would go with the same convention as for database objects, starting with XX<Module>. For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I"; it starts with X. Would you like typing an X when you are looking for an invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to find it (and will see similar reports also starting with "i"). I hope the general schema behind this has now become obvious.

    How to best separate customizations from standard objects? I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own object; therefore the naming is really important. In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, once a month - you may have to add more! And publish them! To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place shall be used to maintain it (like a wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • Perspective in Modeling

    - by drsql
    Your task: model a database that represents a suburban block. You survey the area and see the following houses (pictures culled from Wikipedia here). So you look at the houses and start modeling roofs, windows, lawns, driveways, mailboxes, porches, etc. You get done, and with your 30+ tables you are feeling great, right? I know I would be. “I knocked this out of the park! We can capture everything about these houses. I…am…a…superhero database modeler,” I think, “I will get a big...(read more)

    Read the article

  • Why is database developer pay so high? [closed]

    - by user433500
    Just wondering why someone would get 10k+ in some areas in the US for just writing queries and creating tables, while the average salary for someone who does scripting, object-oriented programming, J2EE and database work all together is only ~12k in New York City. Are there similar opportunities in cities like New York where doing only database work gets one 10k+? What is the rationale for companies paying such a high salary to consultants for just writing simple queries? I am sure a college grad can do that with ease and would be quite satisfied with 60k+ pay for a couple of years. Does location really matter so much?

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low-latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response times, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include: support for Oracle TimesTen In-Memory Database; support for date-time using R's POSIXct/POSIXlt data types; RAW, BLOB and BFILE data type support; an option to specify the number of rows per fetch operation; an option to prefetch LOB data; break support using Ctrl-C; and statement caching support.

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: analytic functions (AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE); analytic clauses (OVER PARTITION BY and OVER ORDER BY); multidimensional grouping operators, with grouping clauses GROUP BY CUBE, GROUP BY ROLLUP and GROUP BY GROUPING SETS and grouping functions GROUP, GROUPING_ID and GROUP_ID; the WITH clause, which allows repeated references to a named subquery block; aggregate expressions over DISTINCT expressions; general expressions that return a character string in the source or a pattern within the LIKE predicate; and the ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause). Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver:

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user:

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen:

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session:

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively:

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time:

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database:

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Concurrency checking with Last Change Time

    - by Lijo
    I have the following three tables: Email (emailNumber, Address); Recipients (reportNumber, emailNumber, lastChangeTime); Report (reportNumber, reportName). I have a C# application that uses inline queries for data selection. I have a select query that selects all reports and their recipients. Recipients are selected as a comma-separated string. During updating, I need to check concurrency. Currently I am using MAX(lastChangeTime) for each reportNumber. This is selected as maxTime. Before the update, it checks that lastChangeTime <= maxTime. --// It works fine. One of my co-developers asked why not use GETDATE() as "maxTime" rather than using a MAX operation. That is also working. What we are checking here is that the records were not updated after the record selection time. Are there any pitfalls in using GETDATE() for this purpose?
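
    One pitfall worth naming: GETDATE() is only equivalent if the "now" is captured on the database server in the same batch as the select; if it comes from the web server's clock, skew between the two machines can let changes slip through. Either way, an in-flight transaction that commits after your read with a slightly earlier lastChangeTime defeats both schemes, which a rowversion column avoids. A minimal sketch of the check itself, assuming the question's schema:

        using System;
        using System.Data.SqlClient;

        class ConcurrencyCheck
        {
            // Returns false if any recipient row for the report changed after
            // the snapshot time captured when the data was first read. In real
            // code, run this and the subsequent UPDATE in one transaction.
            static bool IsUnchanged(SqlConnection conn, int reportNumber, DateTime snapshotTime)
            {
                using (var cmd = new SqlCommand(
                    @"SELECT COUNT(*) FROM Recipients
                      WHERE reportNumber = @report AND lastChangeTime > @snapshot", conn))
                {
                    cmd.Parameters.AddWithValue("@report", reportNumber);
                    cmd.Parameters.AddWithValue("@snapshot", snapshotTime);
                    return (int)cmd.ExecuteScalar() == 0;
                }
            }
        }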

    Read the article

  • Insane load average after reboot

    - by Gazzer
    After doing a reboot of Ubuntu Server 12.04 LTS (following an apt-get dist-upgrade), the load on my server (a 16GB machine) goes insane (around 80) for about 10 or 15 minutes. The only things I can think of are these two processes:

    /usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf --skip-column-names --batch -e select concat('select count(*) into @discard from `', TABLE_SCHEMA, '`.`', TABLE_NAME, '`') from information_schema.TABLES where ENGINE='MyISAM'

    /usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf --skip-column-names --silent --batch --force -e select count(*) into @discard from `information_schema`.`PARTITIONS`

    Is this normal?

    Read the article

  • Microsoft Access 2010: How to Use the Report Wizard

    If you have used Microsoft products or other selections of easy-to-use software in the past, you have probably come across a wizard. A wizard helps you complete tasks with ease, acting as a guide along the way. This particular tutorial will concentrate on Microsoft Access 2010's Report Wizard, which is a useful tool that makes dealing with reports as easy as can be. To showcase what the Report Wizard in Microsoft Access 2010 can do for you, we are going to create a report that is characterized by multiple tables. The overall process is easy, and we will detail the necessary steps to complete i...

    Read the article

  • Q&A: Will my favourite ORM Foo work with SQL Azure?

    - by Eric Nelson
    short answer: quite probably, as SQL Azure is very similar to SQL Server. longer answer: Object Relational Mappers (ORMs) that work with SQL Server are likely, but not guaranteed, to work with SQL Azure. The differences between the RDBMS versions are small, but they may cause problems, for example in the tools used to create the mapping between objects and tables, or in generated SQL from the ORM which expects "certain things" :-) More specifically: ADO.NET Entity Framework / LINQ to Entities can be used with SQL Azure, but the Visual Studio designer does not currently work. You will need to point the designer at a version of your database running on SQL Server to create the mapping, then change the connection details to run against SQL Azure. LINQ to SQL has similar issues to ADO.NET Entity Framework above. NHibernate can be used against SQL Azure. DevExpress XPO supports SQL Azure from version 9.3. DataObjects.Net supports SQL Azure. OpenAccess from Telerik works "seamlessly" - their words, not mine :-) The list above is by no means comprehensive - please leave a comment with details of other ORMs that work (or do not work) with SQL Azure. Related Links: General guidelines and limitations of SQL Azure; SQL Azure vs SQL Server

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to treat one that has moved between parks as a single coaster, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of NHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though.
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal. There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
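
    For readers who haven't used code-first, the shape of the context work described above is roughly this (EF "code first" era API; entity and table names are illustrative, not CoasterBuzz's actual schema):

        using System.Collections.Generic;
        using System.Data.Entity;

        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public int ParkId { get; set; }
            public virtual Park Park { get; set; }  // navigation property
            public virtual ICollection<CoasterInstance> Instances { get; set; }
        }

        public class CoasterInstance
        {
            public int CoasterInstanceId { get; set; }
            public int CoasterId { get; set; }
        }

        public class CoasterDb : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }
            public DbSet<Park> Parks { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // The existing schema predates EF's conventions, so map the
                // table names explicitly instead of renaming the tables.
                modelBuilder.Entity<Coaster>().ToTable("tblCoaster");
                modelBuilder.Entity<Park>().ToTable("tblPark");
            }
        }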

    Read the article

  • What is the difference between all-static-methods and applying a singleton pattern?

    - by shahensha
    I am making a database to store information about the users of my website (I am using Struts 2, and hence Java EE technology). For the database I'll be making a DBManager. Should I apply the singleton pattern here, or rather make all its methods static? I will be using this DBManager for basic things like adding, deleting and updating user profiles. Along with that, I'll use it for all other querying purposes, for instance to find out whether a username already exists, and to get all users for administrative purposes and stuff like that. My questions: What is the benefit of the singleton pattern? Which is most apt here, all static methods or a singleton pattern? Please compare both of them. regards, shahensha P.S. The database is bigger than this. Here I am talking only about the tables which I'll be using to store user information.
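
    For comparison's sake, the trade-off sketched in C# (the asker's stack is Java/Struts 2, but the pattern is identical there): a static class cannot implement an interface, so callers are welded to the concrete type, while a singleton instance can be passed around and replaced with a fake in tests.

        public interface IDbManager
        {
            void AddUser(string username);
            bool UserExists(string username);
        }

        public sealed class DbManager : IDbManager
        {
            // Thread-safe lazy initialization via the static field initializer.
            private static readonly DbManager instance = new DbManager();
            public static DbManager Instance { get { return instance; } }
            private DbManager() { }

            public void AddUser(string username) { /* INSERT ... */ }
            public bool UserExists(string username) { /* SELECT ... */ return false; }
        }

        // Any IDbManager can be handed to code under test; with all-static
        // methods there is no seam through which to substitute a fake.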

    Read the article

  • Removing an SSRS instance from a scale-out deployment

    - by Alex Bransky
    If you're like me you had at one time connected one of your Reporting Services instances to a report server database that was already in use by another instance.  This allows the instance to show up in the Scale-out Deployment section of the Reporting Services Configuration Manager.  My problem was that the server that got joined to the original server was no longer available as it had been repurposed, and when I clicked Remove Server to remove it from my scale-out it would fail because it couldn't contact the server.  After searching for a solution for quite some time I decided to look around in the report server database tables, and voila!  All I had to do was remove the old server from the Keys table.  I can't guarantee there won't be any side effects to this method, but it worked like a charm for me.
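
    The manual cleanup described above, as a parameterized sketch; the Keys table and MachineName column are assumed from the default ReportServer catalog, and editing the catalog directly is unsupported, so back the database up first:

        using System.Data.SqlClient;

        class RemoveScaleOutNode
        {
            static void Run(string reportServerConnectionString, string oldMachineName)
            {
                using (var conn = new SqlConnection(reportServerConnectionString))
                using (var cmd = new SqlCommand(
                    "DELETE FROM dbo.[Keys] WHERE MachineName = @machine", conn))
                {
                    cmd.Parameters.AddWithValue("@machine", oldMachineName);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }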

    Read the article

  • Implementing set of processes in a stored procedure or through the code?

    - by just_name
    I want to know the suitable method for implementing the following case (best practice). I have a set of processes like this: 1. select data from a set of DB tables; 2. loop over the selected results; 3. make some checks on each iteration; 4. insert the results into another table. Should I implement the previous steps in a stored procedure, or in a transaction through my code (ASP.NET)? I am concerned with the performance, security and reliability issues.
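
    If the checks genuinely need application code, one reasonable shape is to wrap the whole pass in a single ADO.NET transaction, as in this sketch (table and column names are placeholders); if the checks can be expressed in SQL, a set-based INSERT ... SELECT inside a stored procedure avoids the per-row round trips entirely:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        class BatchProcess
        {
            static void Run(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        // Step 1: select, buffering rows so the reader is closed
                        // before we start inserting on the same connection.
                        var rows = new List<Tuple<int, string>>();
                        using (var select = new SqlCommand(
                            "SELECT Id, Value FROM SourceTable", conn, tx))
                        using (var reader = select.ExecuteReader())
                            while (reader.Read())
                                rows.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));

                        // Steps 2-4: loop, check, insert.
                        foreach (var row in rows)
                        {
                            if (string.IsNullOrEmpty(row.Item2)) continue; // the per-row check
                            using (var insert = new SqlCommand(
                                "INSERT INTO TargetTable (Id, Value) VALUES (@id, @value)",
                                conn, tx))
                            {
                                insert.Parameters.AddWithValue("@id", row.Item1);
                                insert.Parameters.AddWithValue("@value", row.Item2);
                                insert.ExecuteNonQuery();
                            }
                        }
                        tx.Commit(); // all-or-nothing
                    }
                }
            }
        }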

    Read the article

  • Join me to register for the Summit

    - by Bill Graziano
    This year the Summit registration opens at 6PM on Sunday at the Seattle convention center.  Last year we had a dozen people hanging out, watching the twitter feed on the big monitor and catching up.  All we really needed was a bar and we’d have our own little party going. So this year I’m adding a bar.  I’ve arranged for a cash bar and some stand up tables.  I’m buying the first round for the first 40 or so people that come by.  Come by, register and say Hi.  I’d especially like to encourage first-time attendees to stop by.  This is a low key way to meet some people that will be at the conference.

    Read the article

  • Breaking up a large PHP object used to abstract the database. Best practices?

    - by John Kershaw
    Two years ago it was thought that a single object with functions such as $database->get_user_from_id($ID) would be a good idea. The functions return objects (not arrays), and the front-end code never worries about the database. This was great, until we started growing the database. There are now 30+ tables, and around 150 functions in the database object. It's getting impractical and unmanageable, and I'm going to be breaking it up. What is a good solution to this problem? The project is large, so there's a limit to the extent I can change things. My current plan is to extend the current object for each table, then have the database object contain these. So, the above example would turn into (assume "user" is a table) $database->user->get_user_from_id($ID). Instead of one large file, we would have a file for every table.

    Read the article

  • Connecting a remote MySQL database to a local MySQL database? [migrated]

    - by Shashank
    I want to write PHP code to be embedded in a Drupal 7 module. I want to call a procedure which can copy newly generated data in my local MySQL database to a remote MySQL database. When data is inserted in table 'A' of my local database, it should be copied to the specific table 'B' of the remote MySQL server's database. Table 'A' is on localhost. Table 'B' is on the remote server. Insert data in 'A' -> the data is copied to 'B'. Is this possible? Thanks for the help.

    Read the article

  • Enzo Backup for SQL Azure Beta Released!

    - by ScottKlein
    Blue Syntax is happy to announce the release of their SQL Azure database backup product! Enzo Backup for SQL Azure offers unparalleled backup and restore functionality and flexibility for a SQL Azure database. You can download the beta release here: http://www.bluesyntax.net/backup.aspx With Enzo Backup for SQL Azure, you can: create a backup blob or a backup file from a SQL Azure database; restore a SQL Azure database from a backup blob or a backup file; perform limited backup and restore of SQL Server databases (see details); run backups entirely in the cloud using a remote agent; back up a single schema of a database; restore specific tables only; copy backup devices from on-premise to the cloud; use a command-line utility to perform backup operations; and perform transactionally consistent backups for SQL Azure. Please download it and provide us your feedback!

    Read the article
