Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: There is a SQL Server DB with several tables that have triggers set on them (that collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I backup the DB and then restore it to another machine (Virtual Machine in this case, but it does not matter), the triggers don't work anymore. In fact, no impersonation for the user works anymore. Even a simple statement such as this exec ('select 3') as user='HistoryUser' produces an error: Cannot execute as the database principal because the principal "HistoryUser" does not exist, this type of principal cannot be impersonated, or you do not have permission. I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution) this problem remains. If I create another user without login, I can use it for impersonation just fine. That is, this works just fine: create user TestUser without login go exec ('select 3') as user='TestUser' I do not want to recreate all those triggers, so is there any way how I can make the existing HistoryUser work? Bump: Sorry, but this is kinda urgent...
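
    A common culprit after a backup/restore is that the database owner's SID no longer maps to a valid server principal, which breaks EXECUTE AS for every user without a login. A minimal T-SQL sketch of the usual check and fix (database and login names are illustrative, not from the post):

        USE RestoredDb;
        -- Does the dbo SID still match a server principal?
        SELECT dp.sid AS database_dbo_sid, sp.sid AS server_login_sid
        FROM sys.database_principals dp
        LEFT JOIN sys.server_principals sp ON sp.sid = dp.sid
        WHERE dp.name = 'dbo';

        -- Re-mapping the owner to a valid login often restores impersonation:
        ALTER AUTHORIZATION ON DATABASE::RestoredDb TO sa;

        -- Retest:
        EXECUTE AS USER = 'HistoryUser';
        SELECT 3;
        REVERT;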

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project, which includes all the pages for the app, then references various projects which will have the database and business logic in them. However, some of the people on the project want to have separate projects for the pages of each module. To make this more clear, this is the project layout I'm advocating: /WebInterface* /SecurityLib /ExecutionLib etc... And here is what they are advocating: /SecurityInterface* /SecurityLib /ExecutionInterface* /ExecutionLib etc... *project will be published to a virtual directory of IIS. Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far: Single Virtual Directory Pros: Modules can share a single MasterPage; Modules can share UserControls (this will be common); Links to other modules are within the same virtual directory, and thus don't need to be fully qualified; Less chance of having incompatible module versions together. Multiple Virtual Directories Pros: Can publish a new version of a single module without disrupting other modules; Module is more compartmentalized, so it is less likely that changes will break other modules. I don't buy those arguments, though. First, using load-balanced servers (which we will have) we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency or the break will show up eventually in the other module, when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a linux box I have running Ubuntu 10.04 LTS. I can access mysql via SSH mysql -p and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me... SELECT host, user FROM mysql.user Yields: +-----------+------------------+ | host | user | +-----------+------------------+ | % | AddedUser | | 127.0.0.1 | root | | li241-255 | root | | localhost | debian-sys-maint | | localhost | root | +-----------+------------------+ Problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES but that seems to have no effect. I know it's not Node.js because I'm using the same code on another database and it's working in that environment. Edit This is the error node is giving me. node.js:50 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: ECONNREFUSED, Connection refused at Stream._onConnect (net.js:687:18) at IOWatcher.onWritable [as callback] (net.js:284:12) Edit 2 I have the right port & server as best I can tell. My /etc/mysql/my.cnf contains this: port = 3306 socket = /var/run/mysqld/mysqld.sock My MySQL object contains: { host: 'localhost', port: 3306, user: 'removed', password: 'removed', database: '', typeCast: true, flags: 260047, maxPacketSize: 16777216, charsetNumber: 192, debug: false, ending: false, connected: false, _greeting: null, _queue: [], _connection: null, _parser: null, server: 'ExternalIpAddress' } Possibly useful? netstat -ln | grep mysql unix 2 [ ACC ] STREAM LISTENING 1016418 /var/run/mysqld/mysqld.sock
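
    Two things worth ruling out here, sketched below: the grep above misses numeric TCP lines (with -n a TCP listener shows as :3306, not "mysql"), and MySQL's '%' host entry does not always cover local connections. User, database and password are illustrative:

        -- 1) Confirm there is a TCP listener at all:
        --      netstat -ln | grep 3306
        --    If nothing shows, remove skip-networking / fix bind-address in my.cnf.
        -- 2) Add an explicit localhost account for the same user:
        CREATE USER 'AddedUser'@'localhost' IDENTIFIED BY 'secret';
        GRANT ALL PRIVILEGES ON yourdb.* TO 'AddedUser'@'localhost';
        FLUSH PRIVILEGES;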

  • Preventing ActiveRecord save() on an instance

    - by Craig Walker
    I have an ActiveRecord model object Foo; it represents a standard database row. I want to be able to display modified versions of instances of this object. I'd like to reuse the class itself, as it already has all the hooks & aspects I'll need. (For example: I already have a view that displays the appropriate attributes). Basically I want to clone the model instance, modify some of its properties, and feed it back to the caller (view, test, etc). I do not want these attribute modifications getting back into the database. However, I do want to include the id attribute in the cloned version, as it makes dealing with the route-helpers much easier. Thus, I plan on calling ActiveRecord::Base.clone(), manually setting the ID of the cloned instance, and then making the appropriate attribute changes to the new instance. This has me worried though; one save() on the modified instance and my original data will get clobbered. So, I'm looking to lock down the new instance so that it won't hurt anything else. I'm already planning on calling freeze() (on the understanding that this prevents further modification to the object, though the documentation isn't terribly clear). However, I don't see any obvious way to prevent a save(). What would be the best approach to achieving this?
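
    For what it's worth, ActiveRecord ships a flag for exactly this; a minimal sketch, assuming readonly! is acceptable (the attribute name is hypothetical):

        display_foo = foo.clone          # copies attributes, drops the id
        display_foo.id = foo.id          # keep route helpers working
        display_foo.name = "display-only value"
        display_foo.readonly!            # save / save! now raise ActiveRecord::ReadOnlyRecord

    Unlike freeze, readonly! still lets you read (and even tweak) attributes; it only blocks persistence, so make it the last step.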

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, and inside that loop iterating over a smaller set of strings. Due to the size, this method takes a while to run, so to speed it up, I'm trying to delete a string from the smaller set once it no longer needs to be used. Below is my current code: Ms::Fasta.foreach(@database) do |entry| all.each do |set| if entry.header[1..40].include? set[1] + "|" startVal = entry.sequence.scan_i(set[0])[0] if startVal != nil @locations << [set[0], set[1], startVal, startVal + set[1].length] all.delete(set) end end end end The problem I face is that the easy way, array.delete(string), effectively acts like a break statement in the inner loop, which messes up the results. The only way I know how to fix this is to do this: Ms::Fasta.foreach(@database) do |entry| i = 0 while i < all.length set = all[i] if entry.header[1..40].include? set[1] + "|" startVal = entry.sequence.scan_i(set[0])[0] if startVal != nil @locations << [set[0], set[1], startVal, startVal + set[1].length] all.delete_at(i) i -= 1 end end i += 1 end end This feels kind of sloppy to me. Is there a better way to do this? Thanks.
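
    A sketch of the same loop using Array#delete_if, which mutates the array safely during traversal and removes the index bookkeeping:

        Ms::Fasta.foreach(@database) do |entry|
          all.delete_if do |set|
            next false unless entry.header[1..40].include?(set[1] + "|")
            start_val = entry.sequence.scan_i(set[0])[0]
            next false if start_val.nil?
            @locations << [set[0], set[1], start_val, start_val + set[1].length]
            true  # returning true deletes this set from all
          end
        end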

  • Entity Framework POCO objects

    - by Dan
    I'm struggling with understanding Entity Framework and POCO objects. Here's what I'm trying to achieve: 1) Separate the DAL from the business layer by having my business layer use an interface to my DAL. Maybe use Unity to create my context. 2) Use Entity Framework inside my DAL. I have a domain model with objects that reside in my business layer. I also have a database full of tables that doesn't really represent my domain model. I set up Entity Framework and generated POCO objects by using the ADO.NET POCO Generator extension. This gave me an object for each table in my database. Now I want to be able to say context.GetAll<User>(); and have it return a list of my User objects. The User object is in my business layer. Is that possible? Does that make sense, or am I totally off and should start over? I'm guessing I need to use the repository pattern to achieve this, but I'm not sure. Can anyone help?
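
    A minimal sketch of the repository idea (type and member names are illustrative, not an established API). Note that EF can only materialize types it maps, so GetAll<T> returns the generated table-shaped POCOs, and the business-layer User is produced by projection:

        using System.Data.Objects;
        using System.Linq;

        public interface IRepository
        {
            IQueryable<T> GetAll<T>() where T : class;
        }

        public class EfRepository : IRepository
        {
            private readonly ObjectContext _context;
            public EfRepository(ObjectContext context) { _context = context; }

            public IQueryable<T> GetAll<T>() where T : class
            {
                return _context.CreateObjectSet<T>();
            }
        }

        // Usage: project the table-shaped POCO onto the business object.
        // var users = repo.GetAll<UserRow>()
        //                 .Select(r => new User { Id = r.UserId, Name = r.UserName });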

  • ORDER BY in a Sql Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use it for different things, and want it differently ordered. This view however is used for a VERY SPECIFIC use-case which demands a certain order. (It is team standings for a soccer league.) The database is Sql Server 2008 Express, v.10.0.1763.0 on a Windows Server 2003 R2 box. The view is defined as such: CREATE VIEW season.CurrentStandingsOrdered AS SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING FROM season.CurrentStandings ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS, GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the order by clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
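
    This is documented behavior: from SQL Server 2005 on, the optimizer ignores an ORDER BY inside a TOP 100 PERCENT view, so the only guaranteed ordering is the one on the outer query. A sketch:

        SELECT *
        FROM season.CurrentStandingsOrdered
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;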

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that a LINQ query like from Customer in DB.Customers where Customer.Age > 30 select Customer would get all customers from the database ("SELECT * FROM Customers"), move them to the Customers array and then search that array using .NET methods. This would be very inefficient; what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string, SELECT * FROM Customers WHERE Age > 30, and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables: one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, just as we can't use them in real SQL. We have to create straightforward queries, am I right?
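
    One workaround sketch: materializing the ID list once turns the later Contains into a translatable IN (...) list (reasonable only if the list stays small):

        var popularBookIds = (from sale in DB.Sales
                              where sale.SalesAmount >= 50
                                    && sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select sale.BookID).ToList();

        var popularBooks = from book in DB.Books
                           where popularBookIds.Contains(book.ID)
                           select book;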

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string: SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;"); 192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address. I want to connect to the server above using that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer in the LAN again. This, however, does not work: SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;"); Note that I am a beginner, so I am not quite sure what I am doing even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

  • Associating more than one model with a ListView

    - by Veer
    I have 3 tables in my database. Each table has 3 fields, excluding the ID field, of which 2 fields are of type nvarchar. None of the tables are related. The ListView in my application helps the user search my database, with the search being incremental. The search includes the nvarchar fields of the 3 tables, i.e., 6 fields in total. E.g.: PhoneBook: Name, PhoneNo; Notes: Title, Content; Bookmarks: Name, url. I have the models generated for the 3 tables. Now the ListBox should display the Ph.Name, Title and the Bo.Name fields, i.e., it should be bound to them. But they are from different models. I should also be able to perform CRUD operations on a found item. How would I do that? P.S: Separate ViewModels are created for each Model, which are used by their respective views for handling those tables individually. But this is an integrated view where the user should be able to search everything. Also, could somebody please suggest a better title for this question? :)
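
    One way to sketch this (all names hypothetical): wrap each hit in a common item so a single list control can bind results drawn from the three unrelated models, while keeping the source entity around for CRUD:

        public class SearchHit
        {
            public string Display { get; set; }  // Ph.Name, Note.Title or Bookmark.Name
            public object Source { get; set; }   // the underlying entity, kept for CRUD
        }

        // Building the combined, searchable collection:
        var hits = phonebook.Select(p => new SearchHit { Display = p.Name, Source = p })
            .Concat(notes.Select(n => new SearchHit { Display = n.Title, Source = n }))
            .Concat(bookmarks.Select(b => new SearchHit { Display = b.Name, Source = b }))
            .ToList();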

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set() and deallocate a suffix with clear(). The method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range... The entire index is small (around 10k), it is blazing fast and can be easily serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 / 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, plus be limited to int range). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535), which are compatible with old routers, are being heavily disputed. Before I tell the customer that this is impossible and he will have to settle for my reusable index for lower RTs (up to 65550) and a database sequence for the higher ones (which means that when the user frees a Route Target, it will not become available again): would anyone shed some light? Maybe some kind soul already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
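
    One direction that keeps the BitSet semantics without the up-front half gigabyte: lazily allocated 64K-bit pages, so memory stays proportional to the pages actually touched. A sketch (Java 6 compatible; the linear page scan is the obvious thing to optimize with a first-free-page hint):

        import java.util.BitSet;
        import java.util.HashMap;
        import java.util.Map;

        public class LongIdPool {
            private static final int PAGE_SIZE = 1 << 16;  // 65536 ids per page

            private final Map<Integer, BitSet> pages = new HashMap<Integer, BitSet>();

            public synchronized long allocate() {
                for (int page = 0; page < (1 << 16); page++) {  // 2^16 pages ~ 2^32 ids
                    BitSet bits = pages.get(page);
                    if (bits == null) {
                        bits = new BitSet(PAGE_SIZE);
                        pages.put(page, bits);
                    }
                    int bit = bits.nextClearBit(0);
                    if (bit < PAGE_SIZE) {
                        bits.set(bit);
                        return ((long) page << 16) | bit;
                    }
                }
                throw new IllegalStateException("pool exhausted");
            }

            public synchronized void free(long id) {
                BitSet bits = pages.get((int) (id >>> 16));
                if (bits != null) {
                    bits.clear((int) (id & 0xFFFF));
                }
            }
        }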

  • How to redirect a user depending on user type at login (CodeIgniter)

    - by Anam Tahir
    I'm facing a problem redirecting my user according to their type at login. How can I do it? Here's my code; please suggest how to do it.

        <?php
        if ( ! defined('BASEPATH')) exit('No direct script access allowed');

        class VerifyLogin extends CI_Controller {

            function __construct() {
                parent::__construct();
            }

            function index() {
                $this->load->model('user', '', TRUE);
                // This method will have the credentials validation
                $this->load->library('form_validation');
                $this->load->library('session');
                $this->form_validation->set_rules('username', 'Username', 'trim|required|xss_clean');
                $this->form_validation->set_rules('password', 'Password', 'trim|required|xss_clean|callback_check_database');
                if ($this->form_validation->run() == FALSE) {
                    // Field validation failed. User redirected to login page.
                    validation_errors();
                    $this->load->view('error');
                } else {
                    // Go to private area. Basically, here I want to redirect to the
                    // admin page if the user is an admin, otherwise to home.
                    // How can I do this???
                    redirect('home', 'refresh');
                }
            }

            function check_database($password) {
                // Field validation succeeded. Validate against database.
                $username = $this->input->post('username');
                // Query the database
                $result = $this->user->login($username, $password);
                if ($result) {
                    $sess_array = array();
                    foreach ($result as $row) {
                        $sess_array = array(
                            'id'       => $row->id,
                            'username' => $row->username
                        );
                        $this->session->set_userdata('logged_in', $sess_array);
                    }
                    return TRUE;
                } else {
                    $this->form_validation->set_message('check_database', 'Invalid username or password');
                    return FALSE;
                }
            }
        }
        ?>
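
    A sketch of one common approach (assumption: the users table has a type column with values such as 'admin'): keep the type in the session when the login is validated, then branch on it in index():

        // In check_database(), also store the user's type:
        $sess_array = array(
            'id'       => $row->id,
            'username' => $row->username,
            'type'     => $row->type
        );
        $this->session->set_userdata('logged_in', $sess_array);

        // In index(), after validation passes:
        $user = $this->session->userdata('logged_in');
        if ($user['type'] === 'admin') {
            redirect('admin', 'refresh');
        } else {
            redirect('home', 'refresh');
        }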

  • Check if a date is an allowed weekday in PHP?

    - by moogeek
    Hello! I'm stuck on how to check whether a specific date falls on one of a set of allowed weekdays in PHP. For example:

        function dateIsAllowedWeekday($_date, $_allowed) {
            if ((isDate($_date)) && (($_allowed != "null") && ($_allowed != null))) {
                $allowed_weekdays = json_decode($_allowed);
                $weekdays = array();
                foreach ($allowed_weekdays as $wd) {
                    $weekday = date("l", date("w", strtotime($wd)));
                    array_push($weekdays, $weekday);
                }
                if (in_array(date("l", strtotime($_date)), $weekdays)) {return TRUE;}
                else {return FALSE;}
            }
            else {return FALSE;}
        }

        /////////////////////////////
        $date = "21.05.2010";
        $wd = "[0,1,2]";
        if (dateIsAllowedWeekday($date, $wd)) {echo "$date is within $wd weekday values!";}
        else {echo "$date isn't within $wd weekday values!";}

    I have input dates formatted as "d.m.Y" and an array returned from the database with weekday numbers (formatted as 'numeric representation of the day of the week'), like [0,1,2] for Sunday, Monday, Tuesday. The string returned from the database can be "null", so I check that too. Then the isDate function checks whether the date is a date, and that part is OK. I want to check if my date, for example 21.05.2010, is an allowed weekday in this array. My function always returns TRUE and somehow the weekday is always 'Thursday', and I don't know why... Is there any other way to check this, or what could be the error in the code above? Thanks.
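
    The always-Thursday symptom is the giveaway: date("w", strtotime($wd)) feeds a small integer (or false) back into date() as a Unix timestamp, and every timestamp near 0 lands on Thursday, 1 January 1970. Comparing numeric weekdays directly avoids the round trip; a sketch:

        function dateIsAllowedWeekday($date, $allowedJson)
        {
            if ($allowedJson === null || $allowedJson === "null") {
                return FALSE;
            }
            $allowed = json_decode($allowedJson);  // e.g. [0,1,2] = Sun, Mon, Tue
            $ts = strtotime($date);                // "21.05.2010" parses as d.m.Y
            if ($ts === FALSE || !is_array($allowed)) {
                return FALSE;
            }
            return in_array((int) date("w", $ts), $allowed, TRUE);
        }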

  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so that Hibernate (via JPA annotations) can use them to create tables in the database. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation so it will be renamed to RevisionT and not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session. All the other tables seem to get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing and which errors occur? Thanks. UPDATE: I tried to create the schema in a Derby DB and it was successful. However, one of the fields has the name "index". I use @org.hibernate.annotations.IndexColumn to specify a name that is not a reserved word. However, the column is always called "index" when it is created. Here's a sample of the suspect annotations: @ManyToOne @JoinColumn(name="MasterTopID") @IndexColumn(name="Cx3tHApe") protected MasterTop masterTop; Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?
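
    On the instrumentation question: a sketch of the standard Hibernate settings that make the schema export show its work (placed in persistence.xml or hibernate.cfg.xml), plus the log category that prints DDL failures:

        <property name="hibernate.hbm2ddl.auto">create</property>
        <property name="hibernate.show_sql">true</property>
        <property name="hibernate.format_sql">true</property>
        <!-- in log4j.properties: -->
        <!-- log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG -->

    With hbm2ddl logging at DEBUG, a table that fails to create (e.g. because of a reserved word) shows up along with the exact offending statement.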

  • Is it "right" to translate error messages?

    - by Iraklis
    This is somewhat subjective depending on the target translation language, but bear with me for a sec. I have recently been involved in a translation project. The goal was to translate the strings of an MVC framework to the Greek language. 70% of the language strings of the framework were translated; however, 30% were intentionally left out. The decision was that we would not translate error messages aimed at the developer of the application. The reasoning behind this (in short) was that such messages: are aimed at designers/programmers, and programmers (and even designers :) ) should have a basic understanding of English, at least enough so they can search for the message on Google if they do not know what it means (racist?); are aimed at the developer, and in a perfect world should not be displayed to the end user of the application, as they concern the inner workings of the web application itself, e.g. "You must set the database name in your database config file."; and, perhaps most importantly, they make the life of the developer harder when he tries to get more information/help regarding the error. For example, the above error yields 8 results in Google (in quotes), whereas its Greek translation yields exactly 0. I know that this depends on the popularity of the target translation language and the application itself. For example, I'm guessing that there is a vast amount of documentation regarding German SAP error messages (I know, I know, SAP IS German, but you get the point), as opposed to Greek error message documentation for random application X, which has about 500 installations worldwide. So to summarize: when you develop language translation packs for your applications, do you translate error messages? Do you do it only for predominant languages like English/Spanish/German/French? Or do you leave them intact? I'm not looking for the "right" or "correct" answer; I'm looking for a "best-practices" answer, or whether this problem is addressed by any "official" standard/policy that you have had experience with.

  • How to use db4o IObjectContainer in a web application? (Container lifetime?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following: Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime. Create one IObjectContainer per request. Start a server, and get a client IObjectContainer for each database interaction. What are the implications of these options in terms of performance and concurrency? Since the database file is locked when an IObjectContainer is opened, I am pretty sure that option 2 would give me problems with concurrency; would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from the modified object), and then store it using the same IObjectContainer. Is this true?
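
    For reference, option 3 maps onto db4o's embedded client/server mode; a minimal sketch (file name illustrative), with one server per application lifetime and one cheap client per request:

        // At application startup (port 0 = in-process, no socket):
        IObjectServer server = Db4oFactory.OpenServer("app.yap", 0);

        // Per request:
        IObjectContainer client = server.OpenClient();
        try
        {
            // query, modify and store within this request
        }
        finally
        {
            client.Close();
        }

    Each client container keeps its own reference cache, which is what makes the retrieve-modify-store cycle described in option 3 necessary.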

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature: create an empty database; Tasks - Import Data...; .NET Framework Data Provider for Odbc; valid DSN (verified it connects); "Copy data from one or more tables or views"; check 1 VERY simple table; click Preview; get the error: The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll) A similar error occurs if I go through the rest of the wizard and perform the operation. The failing step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
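
    The quoted '"table_name"' in the error suggests the wizard is emitting ANSI double-quoted identifiers, which a stock MySQL server rejects. A sketch of the usual workaround, set on the MySQL side before running the wizard (the Connector/ODBC DSN's initial-statement option can also carry it):

        SET GLOBAL sql_mode = 'ANSI_QUOTES';
        -- revert to the previous sql_mode once the migration is done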

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object-relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-) However, I do not completely understand two of the main reasons for not using NHibernate or a similar project: One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio. Debugging an ORM can be hard. So, of course I could just build my data access layer with a lot of SELECTs etc., but here I would miss the advantages of automatic joins, lazy-loading proxy classes and a lower maintenance effort when a table gets a new column or a column gets renamed (updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs). Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection-query-based data access layer. Finally, are the arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database-based enterprise application? Are there other arguments they/I might have missed? (I should probably add that I think this is the first "big" .NET/C#-based application which will require teamwork. Good practices which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here so far.)

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization? Define a master test suite that contains all tests (e.g. using ClasspathSuite). Design a sufficient set of JUnit categories (sufficient meaning that every desirable collection of tests is identifiable using one or more categories). Define targeted test suites based on the master test suite and the set of categories. For example: identify categories for speed (slow, fast), dependencies (mock, database, integration), function (...), domain (...). Demand that each test is properly qualified (tagged) with the relevant set of categories. Create the master test suite using ClasspathSuite (all tests found on the classpath). Create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration suite for domain X, etc. (A sketch follows.) My question is really soliciting approval (or disapproval) of this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained in the relevant suites with no suite maintenance. One concern is the proper categorization of each test.
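
    The sketch of such a targeted suite, with the stock JUnit 4.8 annotations (AllTests stands in for the ClasspathSuite-backed master suite; FastTests and DatabaseTests are hypothetical marker interfaces used as categories):

        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.ExcludeCategory;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        @RunWith(Categories.class)
        @IncludeCategory(FastTests.class)
        @ExcludeCategory(DatabaseTests.class)
        @SuiteClasses(AllTests.class)
        public class FastNonDatabaseSuite {
        }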

  • In MySQL, is "EXPLAIN ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run an arbitrary query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider explain select * from atable where class = somefunction(...) where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And could somefunction(...) then be written such that it modifies data?
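
    One documented case where EXPLAIN does real work, as a sketch (version boundary from the MySQL changelogs as best I recall; verify for your version): before MySQL 5.6.3 the server materializes derived tables even for EXPLAIN, so the inner SELECT below runs in full, which is genuine load from a statement that merely "explains":

        EXPLAIN SELECT * FROM (SELECT * FROM huge_table) AS t;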

  • N Tiers with SubSonic 3, Dirty Columns collection is always empty on update

    - by Adel
    Hello guys, here is what I am doing, and it's not working for me. I have a DAL generated with the SubSonic 3 ActiveRecord template, and I have a service layer (business layer, if you will) that is a mixture of a facade and some validation. Say I have a method on the service layer like public void UpdateClient(Client client); In my GUI I create a Client object, fill it with some data and an ID, and pass it to the service method, and this never worked: the dirty columns collection (which tracks which columns are altered, in order to use a more efficient update statement) is always empty. If I get the object from the database inside my GUI and then pass it to the service method, it doesn't work either. The only scenario I find working is if I query the object from the database and call Update() on the same context, all inside my GUI, and this defeats the whole service layer I've created. However, for insert and delete everything works fine. I wonder if this has something to do with object tracking, but as far as I know SubSonic doesn't do that. Please advise. Thanks, Adel.

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?
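
    A frequently cited workaround, sketched with illustrative names: the failing provider is Named Pipes, whose SMB handshake runs as the impersonated Windows identity; forcing TCP in the connection string sidesteps that transport entirely:

        var cs = "Data Source=tcp:NewDbServer,1433;Initial Catalog=MyDb;" +
                 "User ID=appUser;Password=secret;";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
        }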

  • Function returning a class containing a function returning a class

    - by Scott
    I'm working on an object-oriented Excel add-in to retrieve information from our ERP system's database. Here is an example of a function call: itemDescription = Macola.Item("12345").Description Macola is an instance of a class which takes care of database access. Item() is a function of the Macola class which returns an instance of an ItemMaster class. Description() is a function of the ItemMaster class. This is all working correctly. Items can be stored in more than one location, so my next step is to do this: quantityOnHand = Macola.Item("12345").Location("A1").QuantityOnHand Location() is a function of the ItemMaster class which returns an instance of the ItemLocation class (well, in theory anyway). QuantityOnHand() is a function of the ItemLocation class. But for some reason, the ItemLocation class is not even being initialized. Public Function Location(inventoryLocation As String) As ItemLocation Set Location = New ItemLocation Location.Item = item_no Location.Code = inventoryLocation End Function In the above sample, the variable item_no is a member variable of the ItemMaster class. Oddly enough, I can successfully instantiate the ItemLocation class outside of the ItemMaster class in a non-class module. Dim test As New ItemLocation test.Item = "12345" test.Code = "A1" quantityOnHand = test.QuantityOnHand Is there some way to make this work the way I want? I'm trying to keep the API as simple as possible, so that it only takes one line of code to retrieve a value.
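
    A safer VB idiom worth sketching: build the object in an explicit local variable and assign it to the function name once at the end, which avoids any ambiguity between the return variable and a re-entrant reference to Location inside its own body:

        Public Function Location(inventoryLocation As String) As ItemLocation
            Dim result As ItemLocation
            Set result = New ItemLocation
            result.Item = item_no
            result.Code = inventoryLocation
            Set Location = result
        End Function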

  • ZF2 - How to use the Hydrator/exchangeArray() to populate a nested object

    - by Dominic Watson
    I've got an object with values that are stored in my database. My object also contains another object, which is stored in the database using just its ID (a foreign key). http://framework.zend.com/manual/2.0/en/modules/zend.stdlib.hydrator.html Before the Hydrator/exchangeArray() functionality in ZF2, you would use a mapper to grab everything you need to create the object. Now I'm trying to eliminate this extra layer by using hydration/exchangeArray() alone to populate my objects, but am a bit stuck on creating the nested object. Should my entity have the inner object's table injected into it so I can create the inner object when its ID is passed to my exchangeArray()? Here are example entities: // Village: id, name, position, square_id // Map Square: id, name, type Upon receiving square_id, my Village's exchangeArray() function would get the map table and use a hydrator to pull in the Square using the ID I have. It doesn't seem right to have mapper instances inside my entity, though, as I thought entities should be disconnected from anything but their own entity-specific parameters and functionality.
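
    One hydrator-native way to sketch this: a strategy attached to the square_id field, so hydration swaps the ID for a Square built by an injected table/mapper, and extraction swaps it back (class and method names beyond the ZF2 interface itself are assumptions):

        use Zend\Stdlib\Hydrator\Strategy\StrategyInterface;

        class SquareStrategy implements StrategyInterface
        {
            private $squareTable;

            public function __construct($squareTable)
            {
                $this->squareTable = $squareTable;
            }

            public function hydrate($value)   // $value is square_id from the row
            {
                return $this->squareTable->find($value);
            }

            public function extract($value)   // $value is a Square object
            {
                return $value->getId();
            }
        }

        // $hydrator->addStrategy('square_id', new SquareStrategy($squareTable));

    This keeps the entity itself free of mappers; only the hydrator knows how to resolve the foreign key.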

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up: 2 tables; lots of INSERTs; no UPDATEs; some DELETEs, once a day at most; some user-generated SELECTs with JOINs; a huge data set. What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to complete in a reasonably short time. I can provide more information about the current setup (like tables, indexes ...) if needed.
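
    As a starting point only, a postgresql.conf sketch for this insert-heavy, read-mostly shape (values assume a dedicated box with around 16 GB of RAM and the 8.x/9.0-era settings current at the time; benchmark before trusting any of them):

        shared_buffers = 4GB
        effective_cache_size = 12GB
        checkpoint_segments = 32      # smooths sustained INSERT bursts (pre-9.5 setting)
        wal_buffers = 16MB
        maintenance_work_mem = 512MB  # helps index builds and the daily DELETE cleanup
        random_page_cost = 2.0        # assumes RAID 10 or similarly capable disks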
